Jul 11 00:16:39.905053 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:18:23 -00 2025
Jul 11 00:16:39.905122 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:16:39.905134 kernel: BIOS-provided physical RAM map:
Jul 11 00:16:39.905144 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:16:39.905152 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:16:39.905161 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:16:39.905172 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:16:39.905185 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:16:39.905198 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:16:39.905212 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:16:39.905220 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:16:39.905229 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:16:39.905238 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:16:39.905247 kernel: NX (Execute Disable) protection: active
Jul 11 00:16:39.905262 kernel: APIC: Static calls initialized
Jul 11 00:16:39.905272 kernel: SMBIOS 2.8 present.
Jul 11 00:16:39.905287 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:16:39.905297 kernel: DMI: Memory slots populated: 1/1
Jul 11 00:16:39.905307 kernel: Hypervisor detected: KVM
Jul 11 00:16:39.905316 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:16:39.905326 kernel: kvm-clock: using sched offset of 5363228599 cycles
Jul 11 00:16:39.905337 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:16:39.905347 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:16:39.905361 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:16:39.905372 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:16:39.905383 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:16:39.905393 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:16:39.905404 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:16:39.905414 kernel: Using GB pages for direct mapping
Jul 11 00:16:39.905424 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:16:39.905434 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:16:39.905444 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905458 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905468 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905478 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:16:39.905488 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905499 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905509 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905519 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:39.905530 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:16:39.905548 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:16:39.905558 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:16:39.905569 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:16:39.905580 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:16:39.905591 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:16:39.905602 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:16:39.905615 kernel: No NUMA configuration found
Jul 11 00:16:39.905626 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:16:39.905637 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 11 00:16:39.905648 kernel: Zone ranges:
Jul 11 00:16:39.905659 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:16:39.905669 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:16:39.905680 kernel: Normal empty
Jul 11 00:16:39.905691 kernel: Device empty
Jul 11 00:16:39.905718 kernel: Movable zone start for each node
Jul 11 00:16:39.905729 kernel: Early memory node ranges
Jul 11 00:16:39.905744 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:16:39.905755 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:16:39.905766 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:16:39.905777 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:16:39.905787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:16:39.905798 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:16:39.905809 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:16:39.905824 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:16:39.905834 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:16:39.905848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:16:39.905859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:16:39.905872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:16:39.905883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:16:39.905894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:16:39.905905 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:16:39.905916 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:16:39.905927 kernel: TSC deadline timer available
Jul 11 00:16:39.905937 kernel: CPU topo: Max. logical packages: 1
Jul 11 00:16:39.905951 kernel: CPU topo: Max. logical dies: 1
Jul 11 00:16:39.905961 kernel: CPU topo: Max. dies per package: 1
Jul 11 00:16:39.905972 kernel: CPU topo: Max. threads per core: 1
Jul 11 00:16:39.905983 kernel: CPU topo: Num. cores per package: 4
Jul 11 00:16:39.905993 kernel: CPU topo: Num. threads per package: 4
Jul 11 00:16:39.906004 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 11 00:16:39.906015 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:16:39.906036 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:16:39.906047 kernel: kvm-guest: setup PV sched yield
Jul 11 00:16:39.906058 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:16:39.906071 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:16:39.906082 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:16:39.906093 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:16:39.906105 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 11 00:16:39.906115 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 11 00:16:39.906126 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:16:39.906136 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:16:39.906147 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:16:39.906160 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:16:39.906174 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:16:39.906185 kernel: random: crng init done
Jul 11 00:16:39.906196 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:16:39.906206 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:16:39.906217 kernel: Fallback order for Node 0: 0
Jul 11 00:16:39.906228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 11 00:16:39.906239 kernel: Policy zone: DMA32
Jul 11 00:16:39.906249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:16:39.906263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:16:39.906274 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 11 00:16:39.906285 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 00:16:39.906295 kernel: Dynamic Preempt: voluntary
Jul 11 00:16:39.906305 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:16:39.906316 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:16:39.906327 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:16:39.906337 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:16:39.906351 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:16:39.906377 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:16:39.906387 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:16:39.906398 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:16:39.906418 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:39.906429 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:39.906439 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:39.906449 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:16:39.906460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:16:39.906484 kernel: Console: colour VGA+ 80x25
Jul 11 00:16:39.906499 kernel: printk: legacy console [ttyS0] enabled
Jul 11 00:16:39.906508 kernel: ACPI: Core revision 20240827
Jul 11 00:16:39.906519 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:16:39.906533 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:16:39.906544 kernel: x2apic enabled
Jul 11 00:16:39.906559 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:16:39.906570 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:16:39.906582 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:16:39.906597 kernel: kvm-guest: setup PV IPIs
Jul 11 00:16:39.906608 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:16:39.906619 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:16:39.906631 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:16:39.906642 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:16:39.906653 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:16:39.906664 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:16:39.906675 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:16:39.906690 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:16:39.906722 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:16:39.906734 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:16:39.906745 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:16:39.906756 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:16:39.906766 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:16:39.906777 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:16:39.906789 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:16:39.906800 kernel: x86/bugs: return thunk changed
Jul 11 00:16:39.906815 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:16:39.906825 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:16:39.906836 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:16:39.906846 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:16:39.906857 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:16:39.906867 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:16:39.906879 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:16:39.906889 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:16:39.906904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 00:16:39.906915 kernel: landlock: Up and running.
Jul 11 00:16:39.906926 kernel: SELinux: Initializing.
Jul 11 00:16:39.906937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:39.906952 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:39.906963 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:16:39.906975 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:16:39.906985 kernel: ... version: 0
Jul 11 00:16:39.906996 kernel: ... bit width: 48
Jul 11 00:16:39.907009 kernel: ... generic registers: 6
Jul 11 00:16:39.907028 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:16:39.907039 kernel: ... max period: 00007fffffffffff
Jul 11 00:16:39.907049 kernel: ... fixed-purpose events: 0
Jul 11 00:16:39.907059 kernel: ... event mask: 000000000000003f
Jul 11 00:16:39.907070 kernel: signal: max sigframe size: 1776
Jul 11 00:16:39.907081 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:16:39.907093 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:16:39.907104 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 00:16:39.907115 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:16:39.907130 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:16:39.907141 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:16:39.907152 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:16:39.907164 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:16:39.907175 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 11 00:16:39.907186 kernel: devtmpfs: initialized
Jul 11 00:16:39.907197 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:16:39.907208 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:16:39.907220 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:16:39.907234 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:16:39.907246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:16:39.907257 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:16:39.907268 kernel: audit: type=2000 audit(1752192995.956:1): state=initialized audit_enabled=0 res=1
Jul 11 00:16:39.907279 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:16:39.907290 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:16:39.907302 kernel: cpuidle: using governor menu
Jul 11 00:16:39.907313 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:16:39.907324 kernel: dca service started, version 1.12.1
Jul 11 00:16:39.907338 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 11 00:16:39.907349 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:16:39.907360 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:16:39.907372 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:16:39.907383 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:16:39.907395 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:16:39.907406 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:16:39.907417 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:16:39.907431 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:16:39.907442 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:16:39.907453 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:16:39.907464 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:16:39.907476 kernel: ACPI: Interpreter enabled
Jul 11 00:16:39.907487 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:16:39.907498 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:16:39.907509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:16:39.907521 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:16:39.907532 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:16:39.907546 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:16:39.907945 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:16:39.908122 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:16:39.908285 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:16:39.908301 kernel: PCI host bridge to bus 0000:00
Jul 11 00:16:39.908471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:16:39.908630 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:16:39.908840 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:16:39.908987 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:16:39.909144 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:16:39.909287 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:16:39.909437 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:16:39.909632 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 11 00:16:39.909840 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 11 00:16:39.910001 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:16:39.910170 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:16:39.910327 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:16:39.910484 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:16:39.910664 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 00:16:39.910850 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 11 00:16:39.911010 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:16:39.911181 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:16:39.911364 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 00:16:39.911523 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 00:16:39.911681 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:16:39.911862 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:16:39.912058 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 00:16:39.912226 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 11 00:16:39.912384 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:16:39.912539 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:16:39.912713 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:16:39.912899 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 11 00:16:39.913073 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:16:39.913354 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 11 00:16:39.913517 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 11 00:16:39.913676 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:16:39.913890 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 11 00:16:39.914072 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 11 00:16:39.914089 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:16:39.914101 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:16:39.914118 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:16:39.914130 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:16:39.914141 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:16:39.914153 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:16:39.914163 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:16:39.914175 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:16:39.914186 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:16:39.914197 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:16:39.914209 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:16:39.914224 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:16:39.914234 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:16:39.914246 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:16:39.914258 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:16:39.914270 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:16:39.914283 kernel: iommu: Default domain type: Translated
Jul 11 00:16:39.914295 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:16:39.914306 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:16:39.914317 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:16:39.914331 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:16:39.914343 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:16:39.914505 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:16:39.914666 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:16:39.914848 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:16:39.914865 kernel: vgaarb: loaded
Jul 11 00:16:39.914876 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:16:39.914888 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:16:39.914905 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:16:39.914916 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:16:39.914928 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:16:39.914939 kernel: pnp: PnP ACPI init
Jul 11 00:16:39.915137 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:16:39.915156 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:16:39.915168 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:16:39.915180 kernel: NET: Registered PF_INET protocol family
Jul 11 00:16:39.915191 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:16:39.915208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:16:39.915219 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:16:39.915230 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:16:39.915242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:16:39.915253 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:16:39.915264 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:39.915276 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:39.915287 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:16:39.915302 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:16:39.915453 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:16:39.915601 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:16:39.915770 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:16:39.915916 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:16:39.916075 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:16:39.916203 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:16:39.916216 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:16:39.916226 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:16:39.916241 kernel: Initialise system trusted keyrings
Jul 11 00:16:39.916250 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:16:39.916260 kernel: Key type asymmetric registered
Jul 11 00:16:39.916270 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:16:39.916280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:16:39.916290 kernel: io scheduler mq-deadline registered
Jul 11 00:16:39.916300 kernel: io scheduler kyber registered
Jul 11 00:16:39.916311 kernel: io scheduler bfq registered
Jul 11 00:16:39.916322 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:16:39.916339 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:16:39.916350 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:16:39.916362 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:16:39.916373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:16:39.916385 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:16:39.916396 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:16:39.916407 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:16:39.916419 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:16:39.916607 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:16:39.916629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:16:39.916837 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:16:39.916996 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:16:39 UTC (1752192999)
Jul 11 00:16:39.917163 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:16:39.917200 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:16:39.917213 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:16:39.917224 kernel: Segment Routing with IPv6
Jul 11 00:16:39.917235 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:16:39.917250 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:16:39.917260 kernel: Key type dns_resolver registered
Jul 11 00:16:39.917270 kernel: IPI shorthand broadcast: enabled
Jul 11 00:16:39.917282 kernel: sched_clock: Marking stable (3594005523, 138573903)->(3786749357, -54169931)
Jul 11 00:16:39.917297 kernel: registered taskstats version 1
Jul 11 00:16:39.917308 kernel: Loading compiled-in X.509 certificates
Jul 11 00:16:39.917319 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e2778f992738e32ced6c6a485d2ed31f29141742'
Jul 11 00:16:39.917330 kernel: Demotion targets for Node 0: null
Jul 11 00:16:39.917340 kernel: Key type .fscrypt registered
Jul 11 00:16:39.917354 kernel: Key type fscrypt-provisioning registered
Jul 11 00:16:39.917365 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:16:39.917375 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:16:39.917386 kernel: ima: No architecture policies found
Jul 11 00:16:39.917397 kernel: clk: Disabling unused clocks
Jul 11 00:16:39.917408 kernel: Warning: unable to open an initial console.
Jul 11 00:16:39.917420 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 11 00:16:39.917431 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 00:16:39.917446 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 00:16:39.917458 kernel: Run /init as init process
Jul 11 00:16:39.917469 kernel: with arguments:
Jul 11 00:16:39.917480 kernel: /init
Jul 11 00:16:39.917492 kernel: with environment:
Jul 11 00:16:39.917503 kernel: HOME=/
Jul 11 00:16:39.917513 kernel: TERM=linux
Jul 11 00:16:39.917524 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:16:39.917542 systemd[1]: Successfully made /usr/ read-only.
Jul 11 00:16:39.917563 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 00:16:39.917592 systemd[1]: Detected virtualization kvm.
Jul 11 00:16:39.917605 systemd[1]: Detected architecture x86-64.
Jul 11 00:16:39.917616 systemd[1]: Running in initrd.
Jul 11 00:16:39.917628 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:16:39.917644 systemd[1]: Hostname set to .
Jul 11 00:16:39.917656 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:16:39.917669 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:16:39.917681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:39.917712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:39.917727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:16:39.917739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:16:39.917751 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:16:39.917769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:16:39.917782 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:16:39.917793 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:16:39.917804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:39.917815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:39.917827 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:16:39.917839 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:16:39.917857 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:16:39.917873 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:16:39.917887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:16:39.917898 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:16:39.917911 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:16:39.917929 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 00:16:39.917940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:39.917952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:39.917965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:39.917981 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:16:39.917993 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:16:39.918006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:16:39.918028 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:16:39.918042 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 00:16:39.918059 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:16:39.918072 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:16:39.918085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:16:39.918097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:39.918110 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:16:39.918123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:39.918139 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:16:39.918152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:16:39.918196 systemd-journald[222]: Collecting audit messages is disabled.
Jul 11 00:16:39.918230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:16:39.918245 systemd-journald[222]: Journal started
Jul 11 00:16:39.918274 systemd-journald[222]: Runtime Journal (/run/log/journal/67049e000a354e16a3edfcfe6dfda78e) is 6M, max 48.6M, 42.5M free.
Jul 11 00:16:39.903501 systemd-modules-load[223]: Inserted module 'overlay'
Jul 11 00:16:39.955734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:16:39.955781 kernel: Bridge firewalling registered
Jul 11 00:16:39.935782 systemd-modules-load[223]: Inserted module 'br_netfilter'
Jul 11 00:16:39.958839 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:16:39.959421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:39.962172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:39.968846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:39.973250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:16:39.992145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:16:39.993213 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:16:40.006483 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:40.007542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:40.008069 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 00:16:40.010319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:40.015724 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:16:40.017226 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:40.033929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:40.051046 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:16:40.103496 systemd-resolved[262]: Positive Trust Anchors:
Jul 11 00:16:40.103514 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:16:40.103552 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:16:40.106530 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jul 11 00:16:40.108243 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:40.116800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:40.211752 kernel: SCSI subsystem initialized
Jul 11 00:16:40.222743 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:16:40.236807 kernel: iscsi: registered transport (tcp)
Jul 11 00:16:40.266810 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:16:40.266920 kernel: QLogic iSCSI HBA Driver
Jul 11 00:16:40.296299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:16:40.335127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:16:40.336759 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:16:40.438676 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:16:40.444417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:16:40.513766 kernel: raid6: avx2x4 gen() 21446 MB/s
Jul 11 00:16:40.545749 kernel: raid6: avx2x2 gen() 20296 MB/s
Jul 11 00:16:40.563110 kernel: raid6: avx2x1 gen() 17153 MB/s
Jul 11 00:16:40.563164 kernel: raid6: using algorithm avx2x4 gen() 21446 MB/s
Jul 11 00:16:40.581070 kernel: raid6: .... xor() 5786 MB/s, rmw enabled
Jul 11 00:16:40.581163 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:16:40.611763 kernel: xor: automatically using best checksumming function avx
Jul 11 00:16:40.803787 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:16:40.814624 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:16:40.819661 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:40.876524 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 11 00:16:40.884734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:40.889673 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:16:40.924876 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jul 11 00:16:40.963327 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:16:40.967074 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:16:41.058899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:41.061163 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:16:41.114494 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:16:41.114801 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:16:41.119768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 11 00:16:41.131738 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:16:41.132892 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:16:41.140168 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:16:41.140233 kernel: GPT:9289727 != 19775487
Jul 11 00:16:41.140245 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:16:41.142339 kernel: GPT:9289727 != 19775487
Jul 11 00:16:41.142403 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:16:41.142415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.153785 kernel: libata version 3.00 loaded.
Jul 11 00:16:41.169441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:16:41.171230 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:16:41.171425 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:16:41.169740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:41.173850 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:41.178573 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 11 00:16:41.178768 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 11 00:16:41.178913 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:16:41.175730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:41.180924 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:16:41.186750 kernel: scsi host0: ahci
Jul 11 00:16:41.187729 kernel: scsi host1: ahci
Jul 11 00:16:41.189778 kernel: scsi host2: ahci
Jul 11 00:16:41.191724 kernel: scsi host3: ahci
Jul 11 00:16:41.194742 kernel: scsi host4: ahci
Jul 11 00:16:41.215802 kernel: scsi host5: ahci
Jul 11 00:16:41.221897 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 11 00:16:41.221937 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 11 00:16:41.221949 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 11 00:16:41.221960 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 11 00:16:41.221970 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 11 00:16:41.221998 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 11 00:16:41.225061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:16:41.239671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:16:41.278132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:16:41.278499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:41.295515 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:16:41.297024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:16:41.301787 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:16:41.368086 disk-uuid[633]: Primary Header is updated.
Jul 11 00:16:41.368086 disk-uuid[633]: Secondary Entries is updated.
Jul 11 00:16:41.368086 disk-uuid[633]: Secondary Header is updated.
Jul 11 00:16:41.378729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.384746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.529675 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.529800 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:16:41.529811 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.530773 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.531751 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.532748 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:16:41.532774 kernel: ata3.00: applying bridge limits
Jul 11 00:16:41.533742 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.534740 kernel: ata3.00: configured for UDMA/100
Jul 11 00:16:41.536748 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:16:41.588913 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:16:41.589294 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:16:41.608739 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:16:41.919350 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:16:41.942370 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:41.945306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:41.948004 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:16:41.951740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:16:41.987813 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:42.402732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:42.402979 disk-uuid[634]: The operation has completed successfully.
Jul 11 00:16:42.434025 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:16:42.434155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:16:42.479234 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:16:42.512642 sh[664]: Success
Jul 11 00:16:42.532211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:16:42.532313 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:16:42.533418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 00:16:42.544726 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 11 00:16:42.591819 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:16:42.604620 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:16:42.609830 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:16:42.629230 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 00:16:42.629283 kernel: BTRFS: device fsid 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (676)
Jul 11 00:16:42.630639 kernel: BTRFS info (device dm-0): first mount of filesystem 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328
Jul 11 00:16:42.630735 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:42.632145 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 00:16:42.658603 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:16:42.661139 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 00:16:42.663603 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:16:42.666735 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:16:42.669714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:16:42.697752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Jul 11 00:16:42.700347 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:16:42.700413 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:42.700425 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:16:42.709727 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:16:42.711507 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:16:42.713066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:16:42.852518 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:16:42.856540 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:16:42.863640 ignition[754]: Ignition 2.21.0
Jul 11 00:16:42.863655 ignition[754]: Stage: fetch-offline
Jul 11 00:16:42.863692 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:42.863725 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:42.863825 ignition[754]: parsed url from cmdline: ""
Jul 11 00:16:42.863830 ignition[754]: no config URL provided
Jul 11 00:16:42.863836 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:16:42.863847 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:16:42.863875 ignition[754]: op(1): [started] loading QEMU firmware config module
Jul 11 00:16:42.863881 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:16:42.882417 ignition[754]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:16:42.905175 systemd-networkd[851]: lo: Link UP
Jul 11 00:16:42.905186 systemd-networkd[851]: lo: Gained carrier
Jul 11 00:16:42.907105 systemd-networkd[851]: Enumeration completed
Jul 11 00:16:42.907245 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:16:42.907555 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:42.907561 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:16:42.908795 systemd-networkd[851]: eth0: Link UP
Jul 11 00:16:42.908799 systemd-networkd[851]: eth0: Gained carrier
Jul 11 00:16:42.908810 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:42.909167 systemd[1]: Reached target network.target - Network.
Jul 11 00:16:42.929745 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:16:42.939714 ignition[754]: parsing config with SHA512: 907aba4b082fb762bf1ee1c9fd5b9f407ab02191744d1690ea2e08639b35a7ec9c0dd373ed630cc3321d5eb1a0301a781ba392294d17b09b0a1b5c69d3df3feb
Jul 11 00:16:42.943542 unknown[754]: fetched base config from "system"
Jul 11 00:16:42.943556 unknown[754]: fetched user config from "qemu"
Jul 11 00:16:42.944021 ignition[754]: fetch-offline: fetch-offline passed
Jul 11 00:16:42.944128 ignition[754]: Ignition finished successfully
Jul 11 00:16:42.947094 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:16:42.948752 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:16:42.949964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:16:43.000681 ignition[859]: Ignition 2.21.0
Jul 11 00:16:43.000779 ignition[859]: Stage: kargs
Jul 11 00:16:43.001048 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.001060 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.005426 ignition[859]: kargs: kargs passed
Jul 11 00:16:43.005545 ignition[859]: Ignition finished successfully
Jul 11 00:16:43.011131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:16:43.013798 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:16:43.068456 ignition[867]: Ignition 2.21.0
Jul 11 00:16:43.068473 ignition[867]: Stage: disks
Jul 11 00:16:43.068609 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.068620 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.069959 ignition[867]: disks: disks passed
Jul 11 00:16:43.070013 ignition[867]: Ignition finished successfully
Jul 11 00:16:43.076606 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:16:43.078860 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:43.078953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:16:43.081170 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:16:43.083592 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:16:43.084118 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:16:43.089515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:16:43.125885 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 11 00:16:43.371069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:16:43.372433 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:16:43.491887 kernel: EXT4-fs (vda9): mounted filesystem b9a26173-6c72-4a5b-b1cb-ad71b806f75e r/w with ordered data mode. Quota mode: none.
Jul 11 00:16:43.492258 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:16:43.494940 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:16:43.498852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:43.501640 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:16:43.504103 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:16:43.504172 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:16:43.505898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:43.515766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:16:43.517423 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:16:43.522731 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885)
Jul 11 00:16:43.524728 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:16:43.524757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:43.526066 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:16:43.530787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:43.566048 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:16:43.572372 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:16:43.577942 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:16:43.583143 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:16:43.693399 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:16:43.697958 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:16:43.700076 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:16:43.727420 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:16:43.736178 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:16:43.766889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:16:43.790854 ignition[999]: INFO : Ignition 2.21.0
Jul 11 00:16:43.790854 ignition[999]: INFO : Stage: mount
Jul 11 00:16:43.792835 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.792835 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.799123 ignition[999]: INFO : mount: mount passed
Jul 11 00:16:43.837148 ignition[999]: INFO : Ignition finished successfully
Jul 11 00:16:43.840917 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:16:43.844031 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:16:43.870446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:43.899749 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Jul 11 00:16:43.905314 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:16:43.905403 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:43.905432 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:16:43.909796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:43.972614 ignition[1028]: INFO : Ignition 2.21.0
Jul 11 00:16:43.972614 ignition[1028]: INFO : Stage: files
Jul 11 00:16:44.007660 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:44.007660 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:44.007660 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:16:44.044461 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:16:44.044461 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:16:44.049053 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:16:44.050980 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:16:44.052727 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:16:44.051969 unknown[1028]: wrote ssh authorized keys file for user: core
Jul 11 00:16:44.056267 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:16:44.056267 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 00:16:44.105790 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:16:44.240533 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:16:44.240533 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:16:44.245573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 11 00:16:44.610204 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:16:44.688876 systemd-networkd[851]: eth0: Gained IPv6LL
Jul 11 00:16:44.842611 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:16:44.842611 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:44.846913 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:44.846913 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:44.846913 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:44.846913 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:44.923478 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:44.923478 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:44.927310 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:45.096215 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:45.155943 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:45.155943 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:45.338510 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:45.338510 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:45.405515 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 11 00:16:45.919935 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 00:16:46.467660 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:46.467660 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 00:16:46.489958 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:46.669332 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:46.669332 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 00:16:46.669332 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 00:16:46.669332 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:46.677413 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:46.677413 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 00:16:46.677413 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:46.704047 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:46.718092 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:46.722547 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:46.722547 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:46.726305 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:46.726305 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:46.726305 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:46.726305 ignition[1028]: INFO : files: files passed
Jul 11 00:16:46.733494 ignition[1028]: INFO : Ignition finished successfully
Jul 11 00:16:46.731470 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:16:46.735937 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:16:46.750137 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:16:46.770095 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:16:46.920103 initrd-setup-root-after-ignition[1062]: grep:
Jul 11 00:16:46.921721 initrd-setup-root-after-ignition[1058]: grep:
Jul 11 00:16:46.923135 initrd-setup-root-after-ignition[1062]: /sysroot/etc/flatcar/enabled-sysext.conf
Jul 11 00:16:46.925191 initrd-setup-root-after-ignition[1058]: /sysroot/etc/flatcar/enabled-sysext.conf
Jul 11 00:16:46.927837 initrd-setup-root-after-ignition[1062]: : No such file or directory
Jul 11 00:16:46.931438 initrd-setup-root-after-ignition[1058]: : No such file or directory
Jul 11 00:16:46.931311 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:16:46.936609 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:46.936514 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:16:46.941821 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:16:46.946277 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:16:46.951250 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:16:47.023013 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:16:47.023196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:16:47.026890 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:16:47.029157 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:16:47.031547 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:16:47.033261 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:16:47.081447 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:47.087443 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:16:47.121218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:47.121470 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:47.141681 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:16:47.143728 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:16:47.143970 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:47.148164 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:16:47.149339 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:16:47.150456 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:16:47.152672 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:47.154833 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:47.156492 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 00:16:47.159513 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:16:47.160292 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:47.163139 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:16:47.165425 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:16:47.166540 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:16:47.167026 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:16:47.167192 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:47.171215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:47.171820 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:47.172270 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:16:47.177280 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:47.178424 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:16:47.178570 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:16:47.182922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:16:47.183179 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:16:47.184194 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:16:47.187760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:16:47.190938 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:47.191812 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:16:47.192431 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:16:47.192763 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:16:47.192904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:16:47.193315 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:16:47.193412 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:16:47.207866 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:16:47.208106 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:16:47.210005 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:16:47.210234 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:16:47.215364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:16:47.216385 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:16:47.216529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:47.221465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:16:47.224937 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:16:47.227307 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:47.231006 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:16:47.231279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:16:47.260738 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:16:47.260927 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:16:47.287331 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:16:47.291113 ignition[1084]: INFO : Ignition 2.21.0
Jul 11 00:16:47.291113 ignition[1084]: INFO : Stage: umount
Jul 11 00:16:47.293424 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:47.293424 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:47.293424 ignition[1084]: INFO : umount: umount passed
Jul 11 00:16:47.293424 ignition[1084]: INFO : Ignition finished successfully
Jul 11 00:16:47.297922 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:16:47.298097 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:16:47.298775 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:16:47.298942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:16:47.304025 systemd[1]: Stopped target network.target - Network.
Jul 11 00:16:47.305178 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:16:47.305301 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:16:47.307599 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:16:47.307693 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:16:47.310614 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:16:47.310729 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:16:47.314267 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:16:47.314331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:16:47.316732 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:16:47.317463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:16:47.318186 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:16:47.321087 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:47.333469 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:16:47.333658 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:16:47.340446 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 00:16:47.340891 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:16:47.341051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:47.350481 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 00:16:47.352212 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 00:16:47.356176 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:16:47.356246 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:47.359891 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:16:47.363813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:16:47.363947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:16:47.365421 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:16:47.365514 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:47.371009 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:16:47.371083 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:47.373181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:16:47.373244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:47.376862 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:47.382534 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 00:16:47.382643 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:16:47.400000 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:16:47.400936 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:47.403251 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:16:47.403303 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:47.405245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:16:47.405340 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:47.408754 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:16:47.408872 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:16:47.413371 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:16:47.413462 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:16:47.414404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:16:47.414469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:47.422038 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:16:47.425867 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 00:16:47.425979 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:16:47.428434 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:16:47.428530 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:47.434102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:16:47.434227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:47.438886 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 00:16:47.438984 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 00:16:47.439053 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:16:47.439559 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:16:47.442993 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:16:47.454260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:16:47.455389 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:16:47.456365 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:16:47.462402 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:16:47.497175 systemd[1]: Switching root.
Jul 11 00:16:47.546432 systemd-journald[222]: Journal stopped
Jul 11 00:16:49.425686 systemd-journald[222]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:16:49.425779 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:16:49.425803 kernel: SELinux: policy capability open_perms=1
Jul 11 00:16:49.425814 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:16:49.425843 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:16:49.425854 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:16:49.425865 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:16:49.425876 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:16:49.425894 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:16:49.425905 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 00:16:49.425919 kernel: audit: type=1403 audit(1752193008.305:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:16:49.425932 systemd[1]: Successfully loaded SELinux policy in 52.091ms.
Jul 11 00:16:49.425958 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.344ms.
Jul 11 00:16:49.425977 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 00:16:49.425990 systemd[1]: Detected virtualization kvm.
Jul 11 00:16:49.426002 systemd[1]: Detected architecture x86-64.
Jul 11 00:16:49.426014 systemd[1]: Detected first boot.
Jul 11 00:16:49.426026 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:16:49.426040 zram_generator::config[1129]: No configuration found.
Jul 11 00:16:49.426054 kernel: Guest personality initialized and is inactive
Jul 11 00:16:49.426065 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 00:16:49.426077 kernel: Initialized host personality
Jul 11 00:16:49.426088 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 00:16:49.426099 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:16:49.426112 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 00:16:49.426132 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 00:16:49.426144 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 00:16:49.426159 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:16:49.426171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:16:49.426184 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:16:49.426196 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:16:49.426208 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:16:49.426220 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:16:49.426232 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:16:49.426244 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:16:49.426258 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:16:49.426270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:49.426283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:49.426295 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:16:49.426306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:16:49.426325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:16:49.426337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:16:49.426349 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 00:16:49.426363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:49.426376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:49.426387 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 00:16:49.426449 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 00:16:49.426461 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:16:49.426473 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:16:49.426485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:49.426503 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:16:49.426515 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:16:49.426529 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:16:49.426541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:16:49.426553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:16:49.426565 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 00:16:49.426577 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:49.426591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:49.426603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:49.426615 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:16:49.426627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:16:49.426642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:16:49.426654 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:16:49.426667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.426679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:16:49.426691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:16:49.426719 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:16:49.426740 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:16:49.426753 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:16:49.426765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:16:49.426781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:49.426801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:16:49.426818 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:16:49.426847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:49.426874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:16:49.426887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:49.426902 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:16:49.426915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:49.426930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:16:49.426942 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 00:16:49.426955 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 00:16:49.426967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 00:16:49.426979 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 00:16:49.426992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:16:49.427004 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:16:49.427015 kernel: ACPI: bus type drm_connector registered
Jul 11 00:16:49.427029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:16:49.427041 kernel: loop: module loaded
Jul 11 00:16:49.427084 systemd-journald[1193]: Collecting audit messages is disabled.
Jul 11 00:16:49.427108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:16:49.427121 systemd-journald[1193]: Journal started
Jul 11 00:16:49.427144 systemd-journald[1193]: Runtime Journal (/run/log/journal/67049e000a354e16a3edfcfe6dfda78e) is 6M, max 48.6M, 42.5M free.
Jul 11 00:16:49.003020 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:16:49.025756 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:16:49.026430 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 00:16:49.430729 kernel: fuse: init (API version 7.41)
Jul 11 00:16:49.430800 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:16:49.436731 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 00:16:49.447945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:16:49.448106 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 00:16:49.450755 systemd[1]: Stopped verity-setup.service.
Jul 11 00:16:49.462723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.488740 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:16:49.490577 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:16:49.491970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:16:49.493388 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:16:49.494638 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:16:49.496054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:16:49.521176 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:16:49.523179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:49.526178 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:16:49.526479 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:16:49.528661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:49.529073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:49.530767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:16:49.531116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:16:49.532624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:49.532930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:49.535100 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:16:49.535338 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:16:49.537096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:49.537340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:49.538880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:49.540564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:16:49.542298 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:16:49.544141 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 00:16:49.571962 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:16:49.580240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:16:49.583089 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:16:49.584405 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:16:49.584453 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:16:49.586943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 00:16:49.598246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:16:49.601229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:49.604526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:16:49.609836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:16:49.611369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:16:49.613893 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:16:49.615877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:16:49.617990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:16:49.621992 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:16:49.625595 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:16:49.628997 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:16:49.659627 kernel: loop0: detected capacity change from 0 to 146240
Jul 11 00:16:49.659877 systemd-journald[1193]: Time spent on flushing to /var/log/journal/67049e000a354e16a3edfcfe6dfda78e is 16.376ms for 986 entries.
Jul 11 00:16:49.659877 systemd-journald[1193]: System Journal (/var/log/journal/67049e000a354e16a3edfcfe6dfda78e) is 8M, max 195.6M, 187.6M free.
Jul 11 00:16:49.810052 systemd-journald[1193]: Received client request to flush runtime journal.
Jul 11 00:16:49.810106 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:16:49.657059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:49.763248 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:16:49.765157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:16:49.767729 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:16:49.770766 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 00:16:49.775915 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:16:49.837143 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:16:49.839305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:49.926740 kernel: loop1: detected capacity change from 0 to 221472
Jul 11 00:16:49.983824 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 00:16:49.986012 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:16:49.991429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:16:50.002743 kernel: loop2: detected capacity change from 0 to 113872
Jul 11 00:16:50.029650 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:16:50.055359 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jul 11 00:16:50.055385 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jul 11 00:16:50.062729 kernel: loop3: detected capacity change from 0 to 146240
Jul 11 00:16:50.065446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:50.118940 kernel: loop4: detected capacity change from 0 to 221472
Jul 11 00:16:50.135735 kernel: loop5: detected capacity change from 0 to 113872
Jul 11 00:16:50.147069 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:16:50.147786 (sd-merge)[1271]: Merged extensions into '/usr'.
Jul 11 00:16:50.152826 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:16:50.152857 systemd[1]: Reloading...
Jul 11 00:16:50.245732 zram_generator::config[1294]: No configuration found.
Jul 11 00:16:50.380190 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:16:50.398868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:16:50.491490 systemd[1]: Reloading finished in 337 ms.
Jul 11 00:16:50.508831 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:16:50.512330 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:16:50.531945 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:16:50.534501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:16:50.547072 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:16:50.547098 systemd[1]: Reloading...
Jul 11 00:16:50.570241 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 00:16:50.570288 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 00:16:50.570663 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:16:50.571211 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:16:50.572430 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:16:50.572863 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 11 00:16:50.572959 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 11 00:16:50.579006 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:16:50.579026 systemd-tmpfiles[1336]: Skipping /boot
Jul 11 00:16:50.602524 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:16:50.602549 systemd-tmpfiles[1336]: Skipping /boot
Jul 11 00:16:50.658490 zram_generator::config[1363]: No configuration found.
Jul 11 00:16:50.767789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:16:50.871016 systemd[1]: Reloading finished in 323 ms.
Jul 11 00:16:50.892019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:16:50.920425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:50.932384 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 00:16:50.935759 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:16:50.939314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:16:50.946103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:50.950554 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:50.953412 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:16:50.958550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:50.958803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:50.960140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:50.966214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:50.971474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:50.973159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:50.973320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:16:50.973476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:50.978301 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:16:50.983369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:50.984356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:50.987218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:50.987545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:50.991612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:50.992080 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:51.005550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:51.006028 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Jul 11 00:16:51.006649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:51.009371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:51.012143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:51.022435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:51.023579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:51.023843 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:16:51.026128 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:16:51.027479 augenrules[1438]: No rules
Jul 11 00:16:51.035871 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:16:51.037189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:51.039912 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:16:51.040306 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 00:16:51.042568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:16:51.044189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:51.046377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:16:51.048194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:51.048438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:51.050123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:51.055689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:51.057645 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:51.057941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:51.059601 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:16:51.086238 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:16:51.091936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:51.093394 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 00:16:51.094493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:51.097955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:51.103586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:16:51.111897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:51.129919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:51.131324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:51.131367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:16:51.133925 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:16:51.139104 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:16:51.141382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:16:51.141426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:51.154421 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:51.154750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:51.159839 augenrules[1479]: /sbin/augenrules: No change
Jul 11 00:16:51.160966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:51.161281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:51.163306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:16:51.168873 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:16:51.169250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:16:51.177592 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 00:16:51.181372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:51.183827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:51.186206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:16:51.190066 augenrules[1508]: No rules
Jul 11 00:16:51.193937 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:16:51.195389 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 00:16:51.218147 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:16:51.266772 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:16:51.293757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 11 00:16:51.297550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:16:51.301057 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:16:51.302731 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:16:51.309278 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:16:51.309599 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:16:51.373946 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:16:51.411144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:51.493003 kernel: kvm_amd: TSC scaling supported
Jul 11 00:16:51.493095 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:16:51.493150 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:16:51.493175 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:16:51.494086 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:16:51.494130 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:16:51.542756 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:16:51.630041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:51.633043 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:16:51.635019 systemd-networkd[1484]: lo: Link UP
Jul 11 00:16:51.635034 systemd-networkd[1484]: lo: Gained carrier
Jul 11 00:16:51.635053 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:16:51.637017 systemd-networkd[1484]: Enumeration completed
Jul 11 00:16:51.637148 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:16:51.637652 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:51.637664 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:16:51.638391 systemd-networkd[1484]: eth0: Link UP
Jul 11 00:16:51.638724 systemd-networkd[1484]: eth0: Gained carrier
Jul 11 00:16:51.638753 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:51.640942 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 00:16:51.644356 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:16:51.647302 systemd-resolved[1405]: Positive Trust Anchors:
Jul 11 00:16:51.647337 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:16:51.647382 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:16:51.652663 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Jul 11 00:16:51.653784 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:16:51.654843 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Jul 11 00:16:51.655140 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:51.671274 systemd[1]: Reached target network.target - Network.
Jul 11 00:16:51.672510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:51.673874 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:16:51.674077 systemd-timesyncd[1488]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:16:51.674146 systemd-timesyncd[1488]: Initial clock synchronization to Fri 2025-07-11 00:16:51.780622 UTC.
Jul 11 00:16:51.675334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:16:51.676846 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:16:51.678294 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 00:16:51.679900 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:16:51.681217 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:16:51.682670 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:16:51.684155 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:16:51.684188 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:16:51.685248 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:16:51.687410 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:16:51.690758 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:16:51.694854 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 00:16:51.696315 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 00:16:51.697569 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 00:16:51.701292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:16:51.702764 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 00:16:51.704904 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 00:16:51.706368 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:16:51.708807 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:16:51.709829 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:16:51.710911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:16:51.710959 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:16:51.712503 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:16:51.715227 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:16:51.717847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:16:51.720812 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:16:51.723826 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:16:51.724930 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:16:51.726090 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 00:16:51.728217 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:16:51.732782 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:16:51.735775 jq[1564]: false
Jul 11 00:16:51.736580 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:16:51.739821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:16:51.745643 extend-filesystems[1565]: Found /dev/vda6
Jul 11 00:16:51.746978 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache
Jul 11 00:16:51.746668 oslogin_cache_refresh[1566]: Refreshing passwd entry cache
Jul 11 00:16:51.748340 extend-filesystems[1565]: Found /dev/vda9
Jul 11 00:16:51.750232 extend-filesystems[1565]: Checking size of /dev/vda9
Jul 11 00:16:51.753198 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:16:51.755632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:16:51.756396 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:16:51.757835 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:16:51.757555 oslogin_cache_refresh[1566]: Failure getting users, quitting
Jul 11 00:16:51.762799 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting
Jul 11 00:16:51.762799 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:16:51.762799 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache
Jul 11 00:16:51.759997 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:16:51.757585 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:16:51.757665 oslogin_cache_refresh[1566]: Refreshing group entry cache
Jul 11 00:16:51.763776 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:16:51.764967 oslogin_cache_refresh[1566]: Failure getting groups, quitting
Jul 11 00:16:51.766371 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting
Jul 11 00:16:51.766371 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:16:51.765960 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:16:51.764980 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:16:51.771221 extend-filesystems[1565]: Resized partition /dev/vda9
Jul 11 00:16:51.772659 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:16:51.773221 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 00:16:51.773477 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 00:16:51.775379 jq[1587]: true
Jul 11 00:16:51.777033 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:16:51.777315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:16:51.780163 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:16:51.780421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:16:51.793576 extend-filesystems[1600]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 00:16:51.801984 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:16:51.808180 jq[1593]: true
Jul 11 00:16:51.818000 update_engine[1585]: I20250711 00:16:51.817513 1585 main.cc:92] Flatcar Update Engine starting
Jul 11 00:16:51.847512 tar[1592]: linux-amd64/helm
Jul 11 00:16:51.848831 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 00:16:51.848860 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 00:16:51.850214 systemd-logind[1582]: New seat seat0.
Jul 11 00:16:51.860194 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:16:51.867011 dbus-daemon[1562]: [system] SELinux support is enabled
Jul 11 00:16:51.867161 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:16:51.871672 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:16:51.872946 update_engine[1585]: I20250711 00:16:51.872874 1585 update_check_scheduler.cc:74] Next update check in 2m17s
Jul 11 00:16:51.873230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:16:51.873744 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:16:51.874785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:16:51.874822 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 00:16:51.877462 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:16:51.878434 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 11 00:16:51.881951 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:16:51.915235 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:16:51.941520 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:16:51.946394 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:16:51.967878 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:16:51.968258 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:16:51.976582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:16:52.025509 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:16:52.030948 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:16:52.033797 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 00:16:52.035420 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:16:52.086141 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:16:52.171740 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:16:52.224227 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:16:52.224227 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:16:52.224227 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:16:52.230115 extend-filesystems[1565]: Resized filesystem in /dev/vda9
Jul 11 00:16:52.225732 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:16:52.226130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:16:52.232723 bash[1623]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:16:52.235978 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:16:52.239406 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:16:52.431618 tar[1592]: linux-amd64/LICENSE
Jul 11 00:16:52.431805 tar[1592]: linux-amd64/README.md
Jul 11 00:16:52.458240 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 11 00:16:52.496660 containerd[1596]: time="2025-07-11T00:16:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 00:16:52.498168 containerd[1596]: time="2025-07-11T00:16:52.498117219Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 11 00:16:52.516263 containerd[1596]: time="2025-07-11T00:16:52.516176805Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.985µs"
Jul 11 00:16:52.516263 containerd[1596]: time="2025-07-11T00:16:52.516235524Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 00:16:52.516263 containerd[1596]: time="2025-07-11T00:16:52.516260334Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 00:16:52.516572 containerd[1596]: time="2025-07-11T00:16:52.516537001Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 00:16:52.516572 containerd[1596]: time="2025-07-11T00:16:52.516563083Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 00:16:52.516621 containerd[1596]: time="2025-07-11T00:16:52.516593269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:16:52.516714 containerd[1596]: time="2025-07-11T00:16:52.516672713Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:16:52.516714 containerd[1596]: time="2025-07-11T00:16:52.516688084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517106 containerd[1596]: time="2025-07-11T00:16:52.517053253Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517106 containerd[1596]: time="2025-07-11T00:16:52.517074755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517106 containerd[1596]: time="2025-07-11T00:16:52.517085627Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517106 containerd[1596]: time="2025-07-11T00:16:52.517093414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517257 containerd[1596]: time="2025-07-11T00:16:52.517240209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517553 containerd[1596]: time="2025-07-11T00:16:52.517513831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517589 containerd[1596]: time="2025-07-11T00:16:52.517552681Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:16:52.517589 containerd[1596]: time="2025-07-11T00:16:52.517563543Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 00:16:52.517668 containerd[1596]: time="2025-07-11T00:16:52.517644258Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 00:16:52.518077 containerd[1596]: time="2025-07-11T00:16:52.517981167Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 00:16:52.518243 containerd[1596]: time="2025-07-11T00:16:52.518207467Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:16:52.526916 containerd[1596]: time="2025-07-11T00:16:52.526852329Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 00:16:52.526972 containerd[1596]: time="2025-07-11T00:16:52.526937149Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 00:16:52.526972 containerd[1596]: time="2025-07-11T00:16:52.526955524Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 00:16:52.526972 containerd[1596]: time="2025-07-11T00:16:52.526970210Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 00:16:52.527047 containerd[1596]: time="2025-07-11T00:16:52.526988766Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 00:16:52.527047 containerd[1596]: time="2025-07-11T00:16:52.527003713Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 00:16:52.527047 containerd[1596]: time="2025-07-11T00:16:52.527016402Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 00:16:52.527047 containerd[1596]: time="2025-07-11T00:16:52.527028927Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 00:16:52.527047 containerd[1596]: time="2025-07-11T00:16:52.527041988Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 00:16:52.527149 containerd[1596]: time="2025-07-11T00:16:52.527054928Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 00:16:52.527149 containerd[1596]: time="2025-07-11T00:16:52.527068483Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 00:16:52.527149 containerd[1596]: time="2025-07-11T00:16:52.527087020Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 00:16:52.527413 containerd[1596]: time="2025-07-11T00:16:52.527369084Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 00:16:52.527457 containerd[1596]: time="2025-07-11T00:16:52.527414055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 00:16:52.527457 containerd[1596]: time="2025-07-11T00:16:52.527435276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 00:16:52.527494 containerd[1596]: time="2025-07-11T00:16:52.527480298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 00:16:52.527528 containerd[1596]: time="2025-07-11T00:16:52.527494116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 00:16:52.527528 containerd[1596]: time="2025-07-11T00:16:52.527508265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 00:16:52.527641 containerd[1596]: time="2025-07-11T00:16:52.527549445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 00:16:52.527641 containerd[1596]: time="2025-07-11T00:16:52.527594568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 00:16:52.527727 containerd[1596]: time="2025-07-11T00:16:52.527642323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 00:16:52.527727 containerd[1596]: time="2025-07-11T00:16:52.527672710Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 00:16:52.527727 containerd[1596]: time="2025-07-11T00:16:52.527689019Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 00:16:52.528068 containerd[1596]: time="2025-07-11T00:16:52.527832678Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 00:16:52.528068 containerd[1596]: time="2025-07-11T00:16:52.527863601Z" level=info msg="Start snapshots syncer"
Jul 11 00:16:52.528068 containerd[1596]: time="2025-07-11T00:16:52.527904790Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 00:16:52.528370 containerd[1596]: time="2025-07-11T00:16:52.528291623Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 00:16:52.528370 containerd[1596]: time="2025-07-11T00:16:52.528369907Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 00:16:52.529353 containerd[1596]: time="2025-07-11T00:16:52.529308966Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 00:16:52.529495 containerd[1596]: time="2025-07-11T00:16:52.529450961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 00:16:52.529533 containerd[1596]: time="2025-07-11T00:16:52.529492877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 00:16:52.529533 containerd[1596]: time="2025-07-11T00:16:52.529508600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 00:16:52.529533 containerd[1596]: time="2025-07-11T00:16:52.529520522Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 00:16:52.529647 containerd[1596]: time="2025-07-11T00:16:52.529536709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 00:16:52.529647 containerd[1596]: time="2025-07-11T00:16:52.529557606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 00:16:52.529647 containerd[1596]: time="2025-07-11T00:16:52.529584161Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 00:16:52.529647 containerd[1596]: time="2025-07-11T00:16:52.529636828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 00:16:52.529750 containerd[1596]: time="2025-07-11T00:16:52.529651120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 00:16:52.529750 containerd[1596]: time="2025-07-11T00:16:52.529664403Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 00:16:52.529798 containerd[1596]: time="2025-07-11T00:16:52.529742979Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 00:16:52.529798 containerd[1596]: time="2025-07-11T00:16:52.529771854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 00:16:52.529798 containerd[1596]: time="2025-07-11T00:16:52.529780911Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 00:16:52.529798 containerd[1596]: time="2025-07-11T00:16:52.529791198Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 00:16:52.529798 containerd[1596]: time="2025-07-11T00:16:52.529799106Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529810048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529819993Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529838721Z" level=info msg="runtime interface created" Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529844168Z" level=info msg="created NRI interface" Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529852155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529868807Z" level=info msg="Connect containerd service" Jul 11 00:16:52.529900 containerd[1596]: time="2025-07-11T00:16:52.529898791Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:16:52.530841 
containerd[1596]: time="2025-07-11T00:16:52.530799051Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:16:52.706670 containerd[1596]: time="2025-07-11T00:16:52.706533939Z" level=info msg="Start subscribing containerd event" Jul 11 00:16:52.706853 containerd[1596]: time="2025-07-11T00:16:52.706720048Z" level=info msg="Start recovering state" Jul 11 00:16:52.706853 containerd[1596]: time="2025-07-11T00:16:52.706814389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:16:52.706912 containerd[1596]: time="2025-07-11T00:16:52.706879138Z" level=info msg="Start event monitor" Jul 11 00:16:52.706912 containerd[1596]: time="2025-07-11T00:16:52.706894852Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:16:52.706912 containerd[1596]: time="2025-07-11T00:16:52.706905956Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:16:52.706986 containerd[1596]: time="2025-07-11T00:16:52.706916859Z" level=info msg="Start streaming server" Jul 11 00:16:52.706986 containerd[1596]: time="2025-07-11T00:16:52.706937524Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 00:16:52.706986 containerd[1596]: time="2025-07-11T00:16:52.706950061Z" level=info msg="runtime interface starting up..." Jul 11 00:16:52.706986 containerd[1596]: time="2025-07-11T00:16:52.706958411Z" level=info msg="starting plugins..." Jul 11 00:16:52.706986 containerd[1596]: time="2025-07-11T00:16:52.706980761Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 00:16:52.709268 containerd[1596]: time="2025-07-11T00:16:52.707840012Z" level=info msg="containerd successfully booted in 0.211924s" Jul 11 00:16:52.708088 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 11 00:16:53.009086 systemd-networkd[1484]: eth0: Gained IPv6LL Jul 11 00:16:53.012640 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:16:53.014687 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:16:53.017497 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:16:53.021082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:53.031217 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:16:53.068228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:16:53.070980 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:16:53.071347 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:16:53.074855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:16:53.622565 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:16:53.663509 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:56832.service - OpenSSH per-connection server daemon (10.0.0.1:56832). Jul 11 00:16:53.739201 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 56832 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:16:53.741644 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:53.750167 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:16:53.754385 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:16:53.763937 systemd-logind[1582]: New session 1 of user core. Jul 11 00:16:53.789922 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:16:53.794025 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 11 00:16:53.848252 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:16:53.852162 systemd-logind[1582]: New session c1 of user core. Jul 11 00:16:54.048073 systemd[1695]: Queued start job for default target default.target. Jul 11 00:16:54.067103 systemd[1695]: Created slice app.slice - User Application Slice. Jul 11 00:16:54.067132 systemd[1695]: Reached target paths.target - Paths. Jul 11 00:16:54.067185 systemd[1695]: Reached target timers.target - Timers. Jul 11 00:16:54.069069 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:16:54.086161 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:16:54.086335 systemd[1695]: Reached target sockets.target - Sockets. Jul 11 00:16:54.086385 systemd[1695]: Reached target basic.target - Basic System. Jul 11 00:16:54.086437 systemd[1695]: Reached target default.target - Main User Target. Jul 11 00:16:54.086486 systemd[1695]: Startup finished in 220ms. Jul 11 00:16:54.087422 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:16:54.113892 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:16:54.251874 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:56846.service - OpenSSH per-connection server daemon (10.0.0.1:56846). Jul 11 00:16:54.322855 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 56846 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:16:54.326464 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:54.336903 systemd-logind[1582]: New session 2 of user core. Jul 11 00:16:54.343994 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:16:54.449372 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:56852.service - OpenSSH per-connection server daemon (10.0.0.1:56852). 
Jul 11 00:16:54.512291 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 56852 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:16:54.514389 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:54.520957 systemd-logind[1582]: New session 3 of user core. Jul 11 00:16:54.529012 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:16:54.546694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:54.549676 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:16:54.551647 systemd[1]: Startup finished in 3.665s (kernel) + 8.670s (initrd) + 6.296s (userspace) = 18.631s. Jul 11 00:16:54.587546 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:16:54.599361 sshd[1717]: Connection closed by 10.0.0.1 port 56852 Jul 11 00:16:54.599396 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:54.609790 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:56852.service: Deactivated successfully. Jul 11 00:16:54.612674 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:16:54.614592 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:16:54.618345 systemd-logind[1582]: Removed session 3. Jul 11 00:16:54.638428 sshd[1708]: Connection closed by 10.0.0.1 port 56846 Jul 11 00:16:54.638829 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:54.642214 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:56846.service: Deactivated successfully. Jul 11 00:16:54.644432 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:16:54.647039 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:16:54.648230 systemd-logind[1582]: Removed session 2. 
Jul 11 00:16:55.343950 kubelet[1719]: E0711 00:16:55.343840 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:16:55.349406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:16:55.349626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:16:55.350071 systemd[1]: kubelet.service: Consumed 1.897s CPU time, 265.5M memory peak. Jul 11 00:17:04.670521 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:44468.service - OpenSSH per-connection server daemon (10.0.0.1:44468). Jul 11 00:17:04.738162 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 44468 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:04.740122 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:04.745825 systemd-logind[1582]: New session 4 of user core. Jul 11 00:17:04.756985 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:17:04.816916 sshd[1741]: Connection closed by 10.0.0.1 port 44468 Jul 11 00:17:04.817385 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:04.845540 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:44468.service: Deactivated successfully. Jul 11 00:17:04.849150 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:17:04.850866 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:17:04.854901 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:44472.service - OpenSSH per-connection server daemon (10.0.0.1:44472). Jul 11 00:17:04.855574 systemd-logind[1582]: Removed session 4. 
Jul 11 00:17:04.920006 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 44472 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:04.921968 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:04.930018 systemd-logind[1582]: New session 5 of user core. Jul 11 00:17:04.944122 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:17:04.999340 sshd[1749]: Connection closed by 10.0.0.1 port 44472 Jul 11 00:17:04.999491 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:05.016935 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:44472.service: Deactivated successfully. Jul 11 00:17:05.019997 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:17:05.021605 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:17:05.027105 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:44482.service - OpenSSH per-connection server daemon (10.0.0.1:44482). Jul 11 00:17:05.028344 systemd-logind[1582]: Removed session 5. Jul 11 00:17:05.097330 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 44482 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:05.099911 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:05.109203 systemd-logind[1582]: New session 6 of user core. Jul 11 00:17:05.119271 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:17:05.183958 sshd[1757]: Connection closed by 10.0.0.1 port 44482 Jul 11 00:17:05.185756 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:05.201188 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:44482.service: Deactivated successfully. Jul 11 00:17:05.204529 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:17:05.206049 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. 
Jul 11 00:17:05.211287 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:44484.service - OpenSSH per-connection server daemon (10.0.0.1:44484). Jul 11 00:17:05.212618 systemd-logind[1582]: Removed session 6. Jul 11 00:17:05.285349 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 44484 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:05.287579 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:05.293055 systemd-logind[1582]: New session 7 of user core. Jul 11 00:17:05.302273 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:17:05.368758 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:17:05.369166 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:17:05.370553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:17:05.372863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:05.391067 sudo[1766]: pam_unix(sudo:session): session closed for user root Jul 11 00:17:05.394695 sshd[1765]: Connection closed by 10.0.0.1 port 44484 Jul 11 00:17:05.396448 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:05.402314 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:44484.service: Deactivated successfully. Jul 11 00:17:05.405458 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:17:05.407770 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:17:05.421685 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:44486.service - OpenSSH per-connection server daemon (10.0.0.1:44486). Jul 11 00:17:05.423263 systemd-logind[1582]: Removed session 7. 
Jul 11 00:17:05.492726 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 44486 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:05.494448 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:05.499385 systemd-logind[1582]: New session 8 of user core. Jul 11 00:17:05.514937 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:17:05.572360 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:17:05.573215 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:17:05.786800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:05.802240 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:17:06.185540 kubelet[1787]: E0711 00:17:06.185337 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:17:06.192755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:17:06.192989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:17:06.193459 systemd[1]: kubelet.service: Consumed 307ms CPU time, 110.1M memory peak. 
Jul 11 00:17:06.215159 sudo[1780]: pam_unix(sudo:session): session closed for user root Jul 11 00:17:06.225183 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 00:17:06.225658 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:17:06.240136 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:17:06.301481 augenrules[1816]: No rules Jul 11 00:17:06.303459 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:17:06.303863 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 00:17:06.305169 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 11 00:17:06.306959 sshd[1778]: Connection closed by 10.0.0.1 port 44486 Jul 11 00:17:06.307243 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:06.324160 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:44486.service: Deactivated successfully. Jul 11 00:17:06.326851 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:17:06.327964 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:17:06.332130 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:44500.service - OpenSSH per-connection server daemon (10.0.0.1:44500). Jul 11 00:17:06.333019 systemd-logind[1582]: Removed session 8. Jul 11 00:17:06.390895 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 44500 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:17:06.392891 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:06.398755 systemd-logind[1582]: New session 9 of user core. Jul 11 00:17:06.406993 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 11 00:17:06.461568 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:17:06.461909 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:17:07.224775 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:17:07.251554 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:17:07.963741 dockerd[1849]: time="2025-07-11T00:17:07.963603533Z" level=info msg="Starting up" Jul 11 00:17:07.964510 dockerd[1849]: time="2025-07-11T00:17:07.964482005Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 00:17:08.209682 dockerd[1849]: time="2025-07-11T00:17:08.209592595Z" level=info msg="Loading containers: start." Jul 11 00:17:08.231747 kernel: Initializing XFRM netlink socket Jul 11 00:17:08.656891 systemd-networkd[1484]: docker0: Link UP Jul 11 00:17:08.665835 dockerd[1849]: time="2025-07-11T00:17:08.665715204Z" level=info msg="Loading containers: done." Jul 11 00:17:08.693914 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck736200262-merged.mount: Deactivated successfully. 
Jul 11 00:17:08.698737 dockerd[1849]: time="2025-07-11T00:17:08.698371204Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:17:08.698737 dockerd[1849]: time="2025-07-11T00:17:08.698534747Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 11 00:17:08.698737 dockerd[1849]: time="2025-07-11T00:17:08.698752189Z" level=info msg="Initializing buildkit" Jul 11 00:17:08.964744 dockerd[1849]: time="2025-07-11T00:17:08.964537771Z" level=info msg="Completed buildkit initialization" Jul 11 00:17:08.969415 dockerd[1849]: time="2025-07-11T00:17:08.969371453Z" level=info msg="Daemon has completed initialization" Jul 11 00:17:08.969590 dockerd[1849]: time="2025-07-11T00:17:08.969489614Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:17:08.969786 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:17:10.625885 containerd[1596]: time="2025-07-11T00:17:10.625813161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:17:11.871416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980966487.mount: Deactivated successfully. 
Jul 11 00:17:14.707245 containerd[1596]: time="2025-07-11T00:17:14.707148162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:14.711143 containerd[1596]: time="2025-07-11T00:17:14.710946647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 11 00:17:14.714376 containerd[1596]: time="2025-07-11T00:17:14.714244639Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:14.726314 containerd[1596]: time="2025-07-11T00:17:14.726194563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:14.728685 containerd[1596]: time="2025-07-11T00:17:14.728325803Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 4.102438761s" Jul 11 00:17:14.728685 containerd[1596]: time="2025-07-11T00:17:14.728447196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 00:17:14.730988 containerd[1596]: time="2025-07-11T00:17:14.730930593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:17:16.443680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 11 00:17:16.446604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:16.742590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:16.747525 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:17:16.868523 kubelet[2127]: E0711 00:17:16.868379 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:17:16.873664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:17:16.874098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:17:16.874576 systemd[1]: kubelet.service: Consumed 373ms CPU time, 110.5M memory peak. 
Jul 11 00:17:17.397563 containerd[1596]: time="2025-07-11T00:17:17.397455829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:17.400734 containerd[1596]: time="2025-07-11T00:17:17.400607773Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 11 00:17:17.403038 containerd[1596]: time="2025-07-11T00:17:17.402881648Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:17.406880 containerd[1596]: time="2025-07-11T00:17:17.406814389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:17.408019 containerd[1596]: time="2025-07-11T00:17:17.407880912Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.676698143s" Jul 11 00:17:17.408019 containerd[1596]: time="2025-07-11T00:17:17.407932556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 00:17:17.408717 containerd[1596]: time="2025-07-11T00:17:17.408623386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:17:19.908839 containerd[1596]: time="2025-07-11T00:17:19.908602369Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:19.950180 containerd[1596]: time="2025-07-11T00:17:19.950095341Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 11 00:17:19.985355 containerd[1596]: time="2025-07-11T00:17:19.985266629Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:20.041657 containerd[1596]: time="2025-07-11T00:17:20.041507576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:20.042798 containerd[1596]: time="2025-07-11T00:17:20.042741266Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.634062937s" Jul 11 00:17:20.042798 containerd[1596]: time="2025-07-11T00:17:20.042792629Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 00:17:20.043637 containerd[1596]: time="2025-07-11T00:17:20.043435796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:17:22.059264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597504123.mount: Deactivated successfully. 
Jul 11 00:17:22.560580 containerd[1596]: time="2025-07-11T00:17:22.560463063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:22.561035 containerd[1596]: time="2025-07-11T00:17:22.560773935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 11 00:17:22.562384 containerd[1596]: time="2025-07-11T00:17:22.562336263Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:22.564816 containerd[1596]: time="2025-07-11T00:17:22.564774152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:22.565479 containerd[1596]: time="2025-07-11T00:17:22.565414508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.521942185s" Jul 11 00:17:22.565479 containerd[1596]: time="2025-07-11T00:17:22.565471720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 00:17:22.566270 containerd[1596]: time="2025-07-11T00:17:22.566224905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:17:23.396215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125171920.mount: Deactivated successfully. 
Jul 11 00:17:24.980639 containerd[1596]: time="2025-07-11T00:17:24.980504669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:24.981782 containerd[1596]: time="2025-07-11T00:17:24.981637067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:17:24.983513 containerd[1596]: time="2025-07-11T00:17:24.983447780Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:24.987639 containerd[1596]: time="2025-07-11T00:17:24.987546893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:24.989013 containerd[1596]: time="2025-07-11T00:17:24.988951961Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.422684348s" Jul 11 00:17:24.989013 containerd[1596]: time="2025-07-11T00:17:24.989000995Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:17:24.989939 containerd[1596]: time="2025-07-11T00:17:24.989849241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:17:25.681394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438097492.mount: Deactivated successfully. 
Jul 11 00:17:25.794480 containerd[1596]: time="2025-07-11T00:17:25.794347367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:17:25.821693 containerd[1596]: time="2025-07-11T00:17:25.820970749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:17:25.824503 containerd[1596]: time="2025-07-11T00:17:25.824393573Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:17:25.831108 containerd[1596]: time="2025-07-11T00:17:25.830924823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:17:25.832411 containerd[1596]: time="2025-07-11T00:17:25.831567407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 841.659285ms" Jul 11 00:17:25.832411 containerd[1596]: time="2025-07-11T00:17:25.831599713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:17:25.832739 containerd[1596]: time="2025-07-11T00:17:25.832686720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:17:27.124768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 11 00:17:27.127167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:28.635784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:28.653126 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:17:28.710008 kubelet[2213]: E0711 00:17:28.709926 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:17:28.714370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:17:28.714610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:17:28.715034 systemd[1]: kubelet.service: Consumed 263ms CPU time, 111.4M memory peak. Jul 11 00:17:29.756484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479252493.mount: Deactivated successfully. 
Jul 11 00:17:33.311044 containerd[1596]: time="2025-07-11T00:17:33.309919818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:33.313487 containerd[1596]: time="2025-07-11T00:17:33.313414951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 11 00:17:33.317337 containerd[1596]: time="2025-07-11T00:17:33.317266783Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:33.324129 containerd[1596]: time="2025-07-11T00:17:33.323991377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:33.325652 containerd[1596]: time="2025-07-11T00:17:33.325510312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.492684224s" Jul 11 00:17:33.325931 containerd[1596]: time="2025-07-11T00:17:33.325880310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 00:17:35.788630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:35.788903 systemd[1]: kubelet.service: Consumed 263ms CPU time, 111.4M memory peak. Jul 11 00:17:35.791600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:35.821768 systemd[1]: Reload requested from client PID 2304 ('systemctl') (unit session-9.scope)... 
Jul 11 00:17:35.821793 systemd[1]: Reloading... Jul 11 00:17:35.937746 zram_generator::config[2351]: No configuration found. Jul 11 00:17:36.151148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:17:36.312797 systemd[1]: Reloading finished in 490 ms. Jul 11 00:17:36.396320 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:17:36.396453 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:17:36.396950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:36.397024 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98.3M memory peak. Jul 11 00:17:36.399579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:36.656508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:36.681430 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:17:36.735480 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:17:36.735480 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:17:36.735480 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:17:36.736081 kubelet[2396]: I0711 00:17:36.735577 2396 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:17:36.948814 update_engine[1585]: I20250711 00:17:36.948509 1585 update_attempter.cc:509] Updating boot flags... Jul 11 00:17:37.250658 kubelet[2396]: I0711 00:17:37.250499 2396 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:17:37.250658 kubelet[2396]: I0711 00:17:37.250536 2396 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:17:37.250918 kubelet[2396]: I0711 00:17:37.250883 2396 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:17:37.287153 kubelet[2396]: E0711 00:17:37.286852 2396 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:37.290977 kubelet[2396]: I0711 00:17:37.290776 2396 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:17:37.356386 kubelet[2396]: I0711 00:17:37.356328 2396 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:17:37.367617 kubelet[2396]: I0711 00:17:37.367565 2396 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:17:37.367765 kubelet[2396]: I0711 00:17:37.367753 2396 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:17:37.367952 kubelet[2396]: I0711 00:17:37.367900 2396 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:17:37.368169 kubelet[2396]: I0711 00:17:37.367935 2396 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 11 00:17:37.368322 kubelet[2396]: I0711 00:17:37.368186 2396 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:17:37.368322 kubelet[2396]: I0711 00:17:37.368198 2396 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:17:37.370777 kubelet[2396]: I0711 00:17:37.368602 2396 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:37.375315 kubelet[2396]: I0711 00:17:37.375267 2396 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:17:37.375431 kubelet[2396]: I0711 00:17:37.375333 2396 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:17:37.375431 kubelet[2396]: I0711 00:17:37.375393 2396 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:17:37.375533 kubelet[2396]: I0711 00:17:37.375450 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:17:37.378377 kubelet[2396]: W0711 00:17:37.378308 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:37.378453 kubelet[2396]: E0711 00:17:37.378396 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:37.379900 kubelet[2396]: W0711 00:17:37.379799 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:37.379959 kubelet[2396]: E0711 
00:17:37.379912 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:37.381464 kubelet[2396]: I0711 00:17:37.381431 2396 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:17:37.382025 kubelet[2396]: I0711 00:17:37.381995 2396 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:17:37.382158 kubelet[2396]: W0711 00:17:37.382128 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:17:37.387134 kubelet[2396]: I0711 00:17:37.387084 2396 server.go:1274] "Started kubelet" Jul 11 00:17:37.388240 kubelet[2396]: I0711 00:17:37.388047 2396 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:17:37.389944 kubelet[2396]: I0711 00:17:37.389923 2396 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:17:37.390676 kubelet[2396]: I0711 00:17:37.390650 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:17:37.393511 kubelet[2396]: I0711 00:17:37.388127 2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:17:37.394506 kubelet[2396]: I0711 00:17:37.394476 2396 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:17:37.394902 kubelet[2396]: I0711 00:17:37.394870 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:17:37.396368 kubelet[2396]: E0711 00:17:37.396329 
2396 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:17:37.397061 kubelet[2396]: E0711 00:17:37.397034 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:37.397101 kubelet[2396]: I0711 00:17:37.397080 2396 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:17:37.397180 kubelet[2396]: I0711 00:17:37.397159 2396 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:17:37.397225 kubelet[2396]: I0711 00:17:37.397213 2396 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:17:37.397651 kubelet[2396]: E0711 00:17:37.397618 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Jul 11 00:17:37.399891 kubelet[2396]: I0711 00:17:37.397923 2396 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:17:37.399891 kubelet[2396]: I0711 00:17:37.397997 2396 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:17:37.399891 kubelet[2396]: W0711 00:17:37.397680 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:37.399891 kubelet[2396]: E0711 00:17:37.399229 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:37.401120 kubelet[2396]: I0711 00:17:37.401085 2396 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:17:37.402571 kubelet[2396]: E0711 00:17:37.396522 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a4d5b57cd36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:17:37.385073974 +0000 UTC m=+0.694354318,LastTimestamp:2025-07-11 00:17:37.385073974 +0000 UTC m=+0.694354318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:17:37.443883 kubelet[2396]: I0711 00:17:37.443787 2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:17:37.454912 kubelet[2396]: I0711 00:17:37.454883 2396 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:17:37.455059 kubelet[2396]: I0711 00:17:37.455048 2396 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:17:37.455286 kubelet[2396]: I0711 00:17:37.455264 2396 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:37.455678 kubelet[2396]: I0711 00:17:37.455027 2396 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:17:37.455879 kubelet[2396]: I0711 00:17:37.455862 2396 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:17:37.455970 kubelet[2396]: I0711 00:17:37.455960 2396 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:17:37.456089 kubelet[2396]: E0711 00:17:37.456068 2396 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:17:37.457000 kubelet[2396]: W0711 00:17:37.456975 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:37.457543 kubelet[2396]: E0711 00:17:37.457082 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:37.460466 kubelet[2396]: I0711 00:17:37.460445 2396 policy_none.go:49] "None policy: Start" Jul 11 00:17:37.462056 kubelet[2396]: I0711 00:17:37.461463 2396 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:17:37.462056 kubelet[2396]: I0711 00:17:37.461509 2396 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:17:37.492899 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 11 00:17:37.498108 kubelet[2396]: E0711 00:17:37.498050 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:37.556754 kubelet[2396]: E0711 00:17:37.556644 2396 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:17:37.566275 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:17:37.578797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:17:37.595411 kubelet[2396]: I0711 00:17:37.595333 2396 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:17:37.595749 kubelet[2396]: I0711 00:17:37.595731 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:17:37.595809 kubelet[2396]: I0711 00:17:37.595749 2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:17:37.596134 kubelet[2396]: I0711 00:17:37.596104 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:17:37.599081 kubelet[2396]: E0711 00:17:37.598662 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Jul 11 00:17:37.624041 kubelet[2396]: E0711 00:17:37.623940 2396 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:17:37.697641 kubelet[2396]: I0711 00:17:37.697573 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:37.698096 kubelet[2396]: E0711 00:17:37.698044 2396 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jul 11 00:17:37.766305 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 11 00:17:37.795813 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 11 00:17:37.800187 kubelet[2396]: I0711 00:17:37.800144 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:37.800187 kubelet[2396]: I0711 00:17:37.800178 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:37.800619 kubelet[2396]: I0711 00:17:37.800205 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:37.800619 kubelet[2396]: I0711 00:17:37.800224 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:37.800619 kubelet[2396]: I0711 00:17:37.800261 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:37.800619 kubelet[2396]: I0711 00:17:37.800287 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:37.800619 kubelet[2396]: I0711 00:17:37.800308 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:17:37.800839 kubelet[2396]: I0711 00:17:37.800325 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:37.800839 kubelet[2396]: I0711 00:17:37.800343 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:37.814691 systemd[1]: Created slice kubepods-burstable-pod315cc6b48aca4767541b5b6412fd8271.slice - libcontainer container kubepods-burstable-pod315cc6b48aca4767541b5b6412fd8271.slice. Jul 11 00:17:37.900236 kubelet[2396]: I0711 00:17:37.900191 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:37.900798 kubelet[2396]: E0711 00:17:37.900756 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jul 11 00:17:37.999966 kubelet[2396]: E0711 00:17:37.999889 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Jul 11 00:17:38.095327 kubelet[2396]: E0711 00:17:38.094627 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.095722 containerd[1596]: time="2025-07-11T00:17:38.095646087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:38.112573 kubelet[2396]: E0711 00:17:38.112491 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.113350 containerd[1596]: time="2025-07-11T00:17:38.113291438Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:38.117684 kubelet[2396]: E0711 00:17:38.117639 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.118164 containerd[1596]: time="2025-07-11T00:17:38.118116315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315cc6b48aca4767541b5b6412fd8271,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:38.303063 kubelet[2396]: I0711 00:17:38.303015 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:38.303463 kubelet[2396]: E0711 00:17:38.303416 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jul 11 00:17:38.482041 kubelet[2396]: W0711 00:17:38.481763 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:38.482041 kubelet[2396]: E0711 00:17:38.481907 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:38.550864 containerd[1596]: time="2025-07-11T00:17:38.549813752Z" level=info msg="connecting to shim cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8" address="unix:///run/containerd/s/08159c60a945d3a14341dff057f2eb2bffb1c459d74c1e93188f100c1b02af20" 
namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:38.570879 kubelet[2396]: W0711 00:17:38.564046 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:38.570879 kubelet[2396]: E0711 00:17:38.564132 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:38.587942 containerd[1596]: time="2025-07-11T00:17:38.587871770Z" level=info msg="connecting to shim 930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb" address="unix:///run/containerd/s/6d02a36021a4543564a67c085ec5c6fb0b3b7b9c632d4c9e3b595642c4d380f5" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:38.624132 containerd[1596]: time="2025-07-11T00:17:38.623582674Z" level=info msg="connecting to shim 31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377" address="unix:///run/containerd/s/e3230db5c41a75c8275ba9a6ca4d95515e058435a9e18cedfe234a443a3fe12d" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:38.629206 systemd[1]: Started cri-containerd-cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8.scope - libcontainer container cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8. Jul 11 00:17:38.645962 systemd[1]: Started cri-containerd-930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb.scope - libcontainer container 930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb. 
Jul 11 00:17:38.650893 systemd[1]: Started cri-containerd-31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377.scope - libcontainer container 31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377. Jul 11 00:17:38.726929 kubelet[2396]: W0711 00:17:38.726817 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jul 11 00:17:38.730768 kubelet[2396]: E0711 00:17:38.730515 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:38.742333 containerd[1596]: time="2025-07-11T00:17:38.741890591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb\"" Jul 11 00:17:38.745192 kubelet[2396]: E0711 00:17:38.745159 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.748510 containerd[1596]: time="2025-07-11T00:17:38.748463123Z" level=info msg="CreateContainer within sandbox \"930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:17:38.778911 containerd[1596]: time="2025-07-11T00:17:38.778845703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8\"" Jul 11 00:17:38.779823 kubelet[2396]: E0711 00:17:38.779785 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.781684 containerd[1596]: time="2025-07-11T00:17:38.781650862Z" level=info msg="CreateContainer within sandbox \"cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:17:38.801319 kubelet[2396]: E0711 00:17:38.801216 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Jul 11 00:17:38.825056 containerd[1596]: time="2025-07-11T00:17:38.824832710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315cc6b48aca4767541b5b6412fd8271,Namespace:kube-system,Attempt:0,} returns sandbox id \"31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377\"" Jul 11 00:17:38.826567 kubelet[2396]: E0711 00:17:38.826488 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:38.828375 containerd[1596]: time="2025-07-11T00:17:38.828336569Z" level=info msg="CreateContainer within sandbox \"31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:17:38.841240 kubelet[2396]: W0711 00:17:38.841142 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.23:6443: connect: connection refused Jul 11 00:17:38.841240 kubelet[2396]: E0711 00:17:38.841221 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:38.894228 containerd[1596]: time="2025-07-11T00:17:38.894147657Z" level=info msg="Container 7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:17:38.899120 containerd[1596]: time="2025-07-11T00:17:38.899071192Z" level=info msg="Container 545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:17:38.903612 containerd[1596]: time="2025-07-11T00:17:38.903568383Z" level=info msg="Container 4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:17:38.909973 containerd[1596]: time="2025-07-11T00:17:38.909930137Z" level=info msg="CreateContainer within sandbox \"930d0e4d2beca2cd55f6f04d7854b1f897e85e9894f39b0eb532b355fb06fbeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e\"" Jul 11 00:17:38.910808 containerd[1596]: time="2025-07-11T00:17:38.910766571Z" level=info msg="StartContainer for \"7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e\"" Jul 11 00:17:38.911359 containerd[1596]: time="2025-07-11T00:17:38.911334561Z" level=info msg="CreateContainer within sandbox \"cdd597a59a83ac49e6e156fe2c2d612855251e6d930a0b8cce8d86a6945979f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271\"" Jul 11 00:17:38.911665 containerd[1596]: 
time="2025-07-11T00:17:38.911640018Z" level=info msg="StartContainer for \"545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271\"" Jul 11 00:17:38.912118 containerd[1596]: time="2025-07-11T00:17:38.912093674Z" level=info msg="connecting to shim 7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e" address="unix:///run/containerd/s/6d02a36021a4543564a67c085ec5c6fb0b3b7b9c632d4c9e3b595642c4d380f5" protocol=ttrpc version=3 Jul 11 00:17:38.912861 containerd[1596]: time="2025-07-11T00:17:38.912833314Z" level=info msg="connecting to shim 545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271" address="unix:///run/containerd/s/08159c60a945d3a14341dff057f2eb2bffb1c459d74c1e93188f100c1b02af20" protocol=ttrpc version=3 Jul 11 00:17:38.916028 containerd[1596]: time="2025-07-11T00:17:38.915978768Z" level=info msg="CreateContainer within sandbox \"31c59d8171c0e90e9db4638f0d593ad26f607481654ec6734209f3d11f57f377\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48\"" Jul 11 00:17:38.916956 containerd[1596]: time="2025-07-11T00:17:38.916832350Z" level=info msg="StartContainer for \"4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48\"" Jul 11 00:17:38.920492 containerd[1596]: time="2025-07-11T00:17:38.920456726Z" level=info msg="connecting to shim 4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48" address="unix:///run/containerd/s/e3230db5c41a75c8275ba9a6ca4d95515e058435a9e18cedfe234a443a3fe12d" protocol=ttrpc version=3 Jul 11 00:17:38.973981 systemd[1]: Started cri-containerd-545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271.scope - libcontainer container 545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271. 
Jul 11 00:17:38.980113 systemd[1]: Started cri-containerd-4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48.scope - libcontainer container 4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48. Jul 11 00:17:38.982926 systemd[1]: Started cri-containerd-7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e.scope - libcontainer container 7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e. Jul 11 00:17:39.044450 containerd[1596]: time="2025-07-11T00:17:39.044396106Z" level=info msg="StartContainer for \"545c8b10663d6b04faae29b15080d0054572bba4fe5224316c2c7aceb68b2271\" returns successfully" Jul 11 00:17:39.057653 containerd[1596]: time="2025-07-11T00:17:39.057543043Z" level=info msg="StartContainer for \"4211b522a1e05e96b101e4dd9faa99f785c2fed2d740b42b96bf28f034d5fe48\" returns successfully" Jul 11 00:17:39.066851 containerd[1596]: time="2025-07-11T00:17:39.066777088Z" level=info msg="StartContainer for \"7f37ca6ecb8f861239f6b4f37a837b4527e304f6ca33ced8129cd9267664e40e\" returns successfully" Jul 11 00:17:39.105944 kubelet[2396]: I0711 00:17:39.105896 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:39.106587 kubelet[2396]: E0711 00:17:39.106541 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jul 11 00:17:39.471249 kubelet[2396]: E0711 00:17:39.471100 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:39.474550 kubelet[2396]: E0711 00:17:39.474519 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:39.475686 kubelet[2396]: E0711 00:17:39.475652 2396 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:40.406966 kubelet[2396]: E0711 00:17:40.406901 2396 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:17:40.478802 kubelet[2396]: E0711 00:17:40.478766 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:40.708732 kubelet[2396]: I0711 00:17:40.708571 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:41.010445 kubelet[2396]: I0711 00:17:41.010286 2396 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:17:41.010445 kubelet[2396]: E0711 00:17:41.010326 2396 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:17:41.074540 kubelet[2396]: E0711 00:17:41.074488 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:41.175295 kubelet[2396]: E0711 00:17:41.175234 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:41.257517 kubelet[2396]: E0711 00:17:41.257471 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:41.275819 kubelet[2396]: E0711 00:17:41.275607 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:41.376619 kubelet[2396]: E0711 00:17:41.376568 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 11 00:17:42.380095 kubelet[2396]: I0711 00:17:42.379994 2396 apiserver.go:52] "Watching apiserver" Jul 11 00:17:42.398311 kubelet[2396]: I0711 00:17:42.398237 2396 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:17:43.618820 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-9.scope)... Jul 11 00:17:43.618838 systemd[1]: Reloading... Jul 11 00:17:43.721756 zram_generator::config[2728]: No configuration found. Jul 11 00:17:43.850097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:17:44.019305 systemd[1]: Reloading finished in 399 ms. Jul 11 00:17:44.057005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:44.073938 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:17:44.074330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:44.074404 systemd[1]: kubelet.service: Consumed 1.139s CPU time, 133.5M memory peak. Jul 11 00:17:44.076942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:44.346169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:44.360282 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:17:44.437531 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:17:44.437531 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 11 00:17:44.437531 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:17:44.438027 kubelet[2773]: I0711 00:17:44.437595 2773 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:17:44.446719 kubelet[2773]: I0711 00:17:44.446609 2773 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:17:44.446719 kubelet[2773]: I0711 00:17:44.446660 2773 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:17:44.447071 kubelet[2773]: I0711 00:17:44.447027 2773 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:17:44.448778 kubelet[2773]: I0711 00:17:44.448738 2773 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:17:44.451145 kubelet[2773]: I0711 00:17:44.451099 2773 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:17:44.457249 kubelet[2773]: I0711 00:17:44.457189 2773 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:17:44.462431 kubelet[2773]: I0711 00:17:44.462399 2773 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:17:44.462543 kubelet[2773]: I0711 00:17:44.462522 2773 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:17:44.462721 kubelet[2773]: I0711 00:17:44.462667 2773 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:17:44.462956 kubelet[2773]: I0711 00:17:44.462721 2773 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 11 00:17:44.463051 kubelet[2773]: I0711 00:17:44.462962 2773 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:17:44.463051 kubelet[2773]: I0711 00:17:44.462972 2773 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:17:44.463051 kubelet[2773]: I0711 00:17:44.463005 2773 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:44.463128 kubelet[2773]: I0711 00:17:44.463123 2773 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:17:44.463150 kubelet[2773]: I0711 00:17:44.463136 2773 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:17:44.463178 kubelet[2773]: I0711 00:17:44.463164 2773 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:17:44.463178 kubelet[2773]: I0711 00:17:44.463176 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:17:44.464039 kubelet[2773]: I0711 00:17:44.464000 2773 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:17:44.464743 kubelet[2773]: I0711 00:17:44.464719 2773 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:17:44.465636 kubelet[2773]: I0711 00:17:44.465622 2773 server.go:1274] "Started kubelet" Jul 11 00:17:44.466810 kubelet[2773]: I0711 00:17:44.466778 2773 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:17:44.468837 kubelet[2773]: I0711 00:17:44.467960 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:17:44.469481 kubelet[2773]: I0711 00:17:44.469444 2773 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:17:44.470721 kubelet[2773]: I0711 00:17:44.470673 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:17:44.471090 kubelet[2773]: I0711 00:17:44.471071 2773 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:17:44.474232 kubelet[2773]: I0711 00:17:44.474188 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:17:44.477303 kubelet[2773]: I0711 00:17:44.477244 2773 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:17:44.478798 kubelet[2773]: I0711 00:17:44.478773 2773 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:17:44.479295 kubelet[2773]: I0711 00:17:44.479273 2773 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:17:44.480283 kubelet[2773]: E0711 00:17:44.480208 2773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:44.480561 kubelet[2773]: I0711 00:17:44.480543 2773 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:17:44.480764 kubelet[2773]: I0711 00:17:44.480745 2773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:17:44.484455 kubelet[2773]: I0711 00:17:44.484426 2773 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:17:44.484682 kubelet[2773]: I0711 00:17:44.484648 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:17:44.487126 kubelet[2773]: E0711 00:17:44.487105 2773 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:17:44.487255 kubelet[2773]: I0711 00:17:44.487198 2773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:17:44.487339 kubelet[2773]: I0711 00:17:44.487328 2773 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:17:44.487411 kubelet[2773]: I0711 00:17:44.487400 2773 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:17:44.487507 kubelet[2773]: E0711 00:17:44.487490 2773 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:17:44.522210 kubelet[2773]: I0711 00:17:44.522178 2773 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:17:44.522210 kubelet[2773]: I0711 00:17:44.522195 2773 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:17:44.522210 kubelet[2773]: I0711 00:17:44.522215 2773 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:44.522421 kubelet[2773]: I0711 00:17:44.522363 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:17:44.522421 kubelet[2773]: I0711 00:17:44.522372 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:17:44.522421 kubelet[2773]: I0711 00:17:44.522390 2773 policy_none.go:49] "None policy: Start" Jul 11 00:17:44.523123 kubelet[2773]: I0711 00:17:44.523099 2773 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:17:44.523167 kubelet[2773]: I0711 00:17:44.523127 2773 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:17:44.523312 kubelet[2773]: I0711 00:17:44.523293 2773 state_mem.go:75] "Updated machine memory state" Jul 11 00:17:44.528310 kubelet[2773]: I0711 00:17:44.528185 2773 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:17:44.528429 kubelet[2773]: I0711 00:17:44.528404 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:17:44.528461 kubelet[2773]: I0711 00:17:44.528428 2773 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:17:44.528875 kubelet[2773]: I0711 00:17:44.528676 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:17:44.636940 kubelet[2773]: I0711 00:17:44.636769 2773 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:17:44.680017 kubelet[2773]: I0711 00:17:44.679941 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:44.680017 kubelet[2773]: I0711 00:17:44.679993 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:44.680017 kubelet[2773]: I0711 00:17:44.680026 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:44.680296 kubelet[2773]: I0711 00:17:44.680141 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:44.680296 kubelet[2773]: I0711 
00:17:44.680199 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:44.680296 kubelet[2773]: I0711 00:17:44.680216 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:44.680296 kubelet[2773]: I0711 00:17:44.680231 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:44.680296 kubelet[2773]: I0711 00:17:44.680245 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:44.680441 kubelet[2773]: I0711 00:17:44.680271 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:17:44.806053 kubelet[2773]: E0711 00:17:44.805979 2773 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:44.806227 kubelet[2773]: E0711 00:17:44.806086 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:44.806227 kubelet[2773]: E0711 00:17:44.806171 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:44.982038 kubelet[2773]: I0711 00:17:44.980983 2773 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:17:44.982038 kubelet[2773]: I0711 00:17:44.981105 2773 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:17:45.283982 sudo[2813]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:17:45.284467 sudo[2813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:17:45.463883 kubelet[2773]: I0711 00:17:45.463814 2773 apiserver.go:52] "Watching apiserver" Jul 11 00:17:45.480137 kubelet[2773]: I0711 00:17:45.480075 2773 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:17:45.502796 kubelet[2773]: E0711 00:17:45.502740 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.660340 kubelet[2773]: E0711 00:17:45.660066 2773 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:45.660685 kubelet[2773]: E0711 00:17:45.660506 2773 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.663734 kubelet[2773]: E0711 00:17:45.663665 2773 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:45.663964 kubelet[2773]: E0711 00:17:45.663939 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.879963 sudo[2813]: pam_unix(sudo:session): session closed for user root Jul 11 00:17:46.190630 kubelet[2773]: I0711 00:17:46.190414 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.190380254 podStartE2EDuration="2.190380254s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:45.663498227 +0000 UTC m=+1.298098277" watchObservedRunningTime="2025-07-11 00:17:46.190380254 +0000 UTC m=+1.824980304" Jul 11 00:17:46.216348 kubelet[2773]: I0711 00:17:46.216219 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.216186814 podStartE2EDuration="2.216186814s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:46.21558898 +0000 UTC m=+1.850189040" watchObservedRunningTime="2025-07-11 00:17:46.216186814 +0000 UTC m=+1.850786854" Jul 11 00:17:46.216662 kubelet[2773]: I0711 00:17:46.216393 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=2.216385206 podStartE2EDuration="2.216385206s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:46.19053201 +0000 UTC m=+1.825132060" watchObservedRunningTime="2025-07-11 00:17:46.216385206 +0000 UTC m=+1.850985276" Jul 11 00:17:46.504237 kubelet[2773]: E0711 00:17:46.504074 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.504237 kubelet[2773]: E0711 00:17:46.504130 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:47.506641 kubelet[2773]: E0711 00:17:47.506589 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:47.507283 kubelet[2773]: E0711 00:17:47.507259 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:48.294430 sudo[1828]: pam_unix(sudo:session): session closed for user root Jul 11 00:17:48.296233 sshd[1827]: Connection closed by 10.0.0.1 port 44500 Jul 11 00:17:48.296967 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:48.302297 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:44500.service: Deactivated successfully. Jul 11 00:17:48.305443 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:17:48.305751 systemd[1]: session-9.scope: Consumed 5.473s CPU time, 260.4M memory peak. Jul 11 00:17:48.307676 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. 
Jul 11 00:17:48.309296 systemd-logind[1582]: Removed session 9. Jul 11 00:17:48.551215 kubelet[2773]: I0711 00:17:48.551053 2773 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:17:48.552088 kubelet[2773]: I0711 00:17:48.551774 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:17:48.552136 containerd[1596]: time="2025-07-11T00:17:48.551520567Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:17:50.083192 systemd[1]: Created slice kubepods-besteffort-podd5142094_9c90_41ac_b412_1e84ed94ec78.slice - libcontainer container kubepods-besteffort-podd5142094_9c90_41ac_b412_1e84ed94ec78.slice. Jul 11 00:17:50.213913 kubelet[2773]: I0711 00:17:50.213849 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4p4\" (UniqueName: \"kubernetes.io/projected/d5142094-9c90-41ac-b412-1e84ed94ec78-kube-api-access-hx4p4\") pod \"kube-proxy-qggjq\" (UID: \"d5142094-9c90-41ac-b412-1e84ed94ec78\") " pod="kube-system/kube-proxy-qggjq" Jul 11 00:17:50.213913 kubelet[2773]: I0711 00:17:50.213901 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5142094-9c90-41ac-b412-1e84ed94ec78-xtables-lock\") pod \"kube-proxy-qggjq\" (UID: \"d5142094-9c90-41ac-b412-1e84ed94ec78\") " pod="kube-system/kube-proxy-qggjq" Jul 11 00:17:50.214508 kubelet[2773]: I0711 00:17:50.213925 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5142094-9c90-41ac-b412-1e84ed94ec78-kube-proxy\") pod \"kube-proxy-qggjq\" (UID: \"d5142094-9c90-41ac-b412-1e84ed94ec78\") " pod="kube-system/kube-proxy-qggjq" Jul 11 00:17:50.214508 kubelet[2773]: I0711 00:17:50.213986 2773 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5142094-9c90-41ac-b412-1e84ed94ec78-lib-modules\") pod \"kube-proxy-qggjq\" (UID: \"d5142094-9c90-41ac-b412-1e84ed94ec78\") " pod="kube-system/kube-proxy-qggjq" Jul 11 00:17:51.086203 systemd[1]: Created slice kubepods-burstable-pod5a380c84_61e2_41b6_b55a_1e950e98d990.slice - libcontainer container kubepods-burstable-pod5a380c84_61e2_41b6_b55a_1e950e98d990.slice. Jul 11 00:17:51.219895 kubelet[2773]: I0711 00:17:51.219825 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-hostproc\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.219895 kubelet[2773]: I0711 00:17:51.219875 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-cgroup\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.219895 kubelet[2773]: I0711 00:17:51.219897 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a380c84-61e2-41b6-b55a-1e950e98d990-clustermesh-secrets\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.219895 kubelet[2773]: I0711 00:17:51.219913 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-config-path\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " 
pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.219934 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-kernel\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.219949 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-hubble-tls\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.219963 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-bpf-maps\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.219977 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-run\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.219991 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cni-path\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220481 kubelet[2773]: I0711 00:17:51.220003 2773 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-lib-modules\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220624 kubelet[2773]: I0711 00:17:51.220017 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-xtables-lock\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220624 kubelet[2773]: I0711 00:17:51.220042 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-etc-cni-netd\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220624 kubelet[2773]: I0711 00:17:51.220056 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjg8\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:51.220624 kubelet[2773]: I0711 00:17:51.220080 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-net\") pod \"cilium-x285l\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " pod="kube-system/cilium-x285l" Jul 11 00:17:52.846200 kubelet[2773]: E0711 00:17:52.846141 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 11 00:17:52.846200 
kubelet[2773]: E0711 00:17:52.846192 2773 projected.go:194] Error preparing data for projected volume kube-api-access-hx4p4 for pod kube-system/kube-proxy-qggjq: configmap "kube-root-ca.crt" not found Jul 11 00:17:52.846970 kubelet[2773]: E0711 00:17:52.846296 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5142094-9c90-41ac-b412-1e84ed94ec78-kube-api-access-hx4p4 podName:d5142094-9c90-41ac-b412-1e84ed94ec78 nodeName:}" failed. No retries permitted until 2025-07-11 00:17:53.346255844 +0000 UTC m=+8.980855894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hx4p4" (UniqueName: "kubernetes.io/projected/d5142094-9c90-41ac-b412-1e84ed94ec78-kube-api-access-hx4p4") pod "kube-proxy-qggjq" (UID: "d5142094-9c90-41ac-b412-1e84ed94ec78") : configmap "kube-root-ca.crt" not found Jul 11 00:17:52.855432 kubelet[2773]: E0711 00:17:52.855384 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 11 00:17:52.855432 kubelet[2773]: E0711 00:17:52.855419 2773 projected.go:194] Error preparing data for projected volume kube-api-access-5bjg8 for pod kube-system/cilium-x285l: configmap "kube-root-ca.crt" not found Jul 11 00:17:52.855661 kubelet[2773]: E0711 00:17:52.855475 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8 podName:5a380c84-61e2-41b6-b55a-1e950e98d990 nodeName:}" failed. No retries permitted until 2025-07-11 00:17:53.355453546 +0000 UTC m=+8.990053596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5bjg8" (UniqueName: "kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8") pod "cilium-x285l" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990") : configmap "kube-root-ca.crt" not found Jul 11 00:17:53.484994 kubelet[2773]: E0711 00:17:53.484939 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:53.489752 kubelet[2773]: E0711 00:17:53.489681 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:53.490430 containerd[1596]: time="2025-07-11T00:17:53.490388707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x285l,Uid:5a380c84-61e2-41b6-b55a-1e950e98d990,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:53.684927 kubelet[2773]: E0711 00:17:53.684857 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:53.703752 kubelet[2773]: E0711 00:17:53.703107 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:53.704031 containerd[1596]: time="2025-07-11T00:17:53.703970015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qggjq,Uid:d5142094-9c90-41ac-b412-1e84ed94ec78,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:53.713578 systemd[1]: Created slice kubepods-besteffort-pod5e30c967_5f21_467d_aa3c_66bac9e1b9d8.slice - libcontainer container kubepods-besteffort-pod5e30c967_5f21_467d_aa3c_66bac9e1b9d8.slice. 
Jul 11 00:17:53.740417 kubelet[2773]: I0711 00:17:53.740150 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gz96\" (UniqueName: \"kubernetes.io/projected/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-kube-api-access-8gz96\") pod \"cilium-operator-5d85765b45-v64ww\" (UID: \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\") " pod="kube-system/cilium-operator-5d85765b45-v64ww" Jul 11 00:17:53.740417 kubelet[2773]: I0711 00:17:53.740297 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-cilium-config-path\") pod \"cilium-operator-5d85765b45-v64ww\" (UID: \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\") " pod="kube-system/cilium-operator-5d85765b45-v64ww" Jul 11 00:17:54.017911 kubelet[2773]: E0711 00:17:54.017731 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.018555 containerd[1596]: time="2025-07-11T00:17:54.018449547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v64ww,Uid:5e30c967-5f21-467d-aa3c-66bac9e1b9d8,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:54.365348 containerd[1596]: time="2025-07-11T00:17:54.365284618Z" level=info msg="connecting to shim a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:54.402262 kubelet[2773]: E0711 00:17:54.402216 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.407122 systemd[1]: Started 
cri-containerd-a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f.scope - libcontainer container a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f. Jul 11 00:17:54.433241 containerd[1596]: time="2025-07-11T00:17:54.433159989Z" level=info msg="connecting to shim 525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b" address="unix:///run/containerd/s/96adfd772209020f1b7d84e35be98968b89896c1155e9f6c14f24856f9c58608" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:54.454855 containerd[1596]: time="2025-07-11T00:17:54.454777942Z" level=info msg="connecting to shim b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2" address="unix:///run/containerd/s/d7c984f14c526896d478cc13c066cbbec279b79157d784b5a9463ee38142d085" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:17:54.468733 containerd[1596]: time="2025-07-11T00:17:54.467338956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x285l,Uid:5a380c84-61e2-41b6-b55a-1e950e98d990,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\"" Jul 11 00:17:54.471080 kubelet[2773]: E0711 00:17:54.471032 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.473442 containerd[1596]: time="2025-07-11T00:17:54.473357476Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:17:54.486062 systemd[1]: Started cri-containerd-525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b.scope - libcontainer container 525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b. 
Jul 11 00:17:54.503940 systemd[1]: Started cri-containerd-b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2.scope - libcontainer container b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2. Jul 11 00:17:54.520189 kubelet[2773]: E0711 00:17:54.520134 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.523116 kubelet[2773]: E0711 00:17:54.523086 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.729646 containerd[1596]: time="2025-07-11T00:17:54.729483435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qggjq,Uid:d5142094-9c90-41ac-b412-1e84ed94ec78,Namespace:kube-system,Attempt:0,} returns sandbox id \"525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b\"" Jul 11 00:17:54.730425 kubelet[2773]: E0711 00:17:54.730382 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.733483 containerd[1596]: time="2025-07-11T00:17:54.733426610Z" level=info msg="CreateContainer within sandbox \"525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:17:54.825055 containerd[1596]: time="2025-07-11T00:17:54.824948584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v64ww,Uid:5e30c967-5f21-467d-aa3c-66bac9e1b9d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\"" Jul 11 00:17:54.832078 kubelet[2773]: E0711 00:17:54.826402 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:54.933969 containerd[1596]: time="2025-07-11T00:17:54.933898755Z" level=info msg="Container 8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:17:54.951548 containerd[1596]: time="2025-07-11T00:17:54.951310267Z" level=info msg="CreateContainer within sandbox \"525194c28573293b2d564fa46b9d09857b1ed6c1a8ea2749d5ca51e31709068b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740\"" Jul 11 00:17:54.952452 containerd[1596]: time="2025-07-11T00:17:54.952396326Z" level=info msg="StartContainer for \"8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740\"" Jul 11 00:17:54.954330 containerd[1596]: time="2025-07-11T00:17:54.954286030Z" level=info msg="connecting to shim 8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740" address="unix:///run/containerd/s/96adfd772209020f1b7d84e35be98968b89896c1155e9f6c14f24856f9c58608" protocol=ttrpc version=3 Jul 11 00:17:54.983034 systemd[1]: Started cri-containerd-8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740.scope - libcontainer container 8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740. 
Jul 11 00:17:55.036980 containerd[1596]: time="2025-07-11T00:17:55.036933185Z" level=info msg="StartContainer for \"8355b551a5b8b8914aad91ebcef054e9f27e10cbc8af2adc1774fb6933c8f740\" returns successfully" Jul 11 00:17:55.524939 kubelet[2773]: E0711 00:17:55.524890 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:56.173562 kubelet[2773]: E0711 00:17:56.173516 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:56.418087 kubelet[2773]: I0711 00:17:56.417906 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qggjq" podStartSLOduration=8.41787859 podStartE2EDuration="8.41787859s" podCreationTimestamp="2025-07-11 00:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:55.6778699 +0000 UTC m=+11.312469940" watchObservedRunningTime="2025-07-11 00:17:56.41787859 +0000 UTC m=+12.052478640" Jul 11 00:18:03.706565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027508311.mount: Deactivated successfully. 
Jul 11 00:18:11.827685 containerd[1596]: time="2025-07-11T00:18:11.827587992Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:11.851962 containerd[1596]: time="2025-07-11T00:18:11.851891497Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 00:18:11.906967 containerd[1596]: time="2025-07-11T00:18:11.906853929Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:11.908582 containerd[1596]: time="2025-07-11T00:18:11.908444032Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.435040774s" Jul 11 00:18:11.908582 containerd[1596]: time="2025-07-11T00:18:11.908476666Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 00:18:11.915815 containerd[1596]: time="2025-07-11T00:18:11.915770336Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:18:11.941970 containerd[1596]: time="2025-07-11T00:18:11.941911058Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:18:11.964978 containerd[1596]: time="2025-07-11T00:18:11.964886852Z" level=info msg="Container 284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:18:11.969846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196907964.mount: Deactivated successfully. Jul 11 00:18:11.979693 containerd[1596]: time="2025-07-11T00:18:11.979375320Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\"" Jul 11 00:18:11.980218 containerd[1596]: time="2025-07-11T00:18:11.980155016Z" level=info msg="StartContainer for \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\"" Jul 11 00:18:11.981431 containerd[1596]: time="2025-07-11T00:18:11.981388471Z" level=info msg="connecting to shim 284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" protocol=ttrpc version=3 Jul 11 00:18:12.058375 systemd[1]: Started cri-containerd-284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def.scope - libcontainer container 284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def. Jul 11 00:18:12.103688 containerd[1596]: time="2025-07-11T00:18:12.103524804Z" level=info msg="StartContainer for \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" returns successfully" Jul 11 00:18:12.116828 systemd[1]: cri-containerd-284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def.scope: Deactivated successfully. 
Jul 11 00:18:12.118968 containerd[1596]: time="2025-07-11T00:18:12.118871568Z" level=info msg="received exit event container_id:\"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" id:\"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" pid:3196 exited_at:{seconds:1752193092 nanos:118289119}" Jul 11 00:18:12.118968 containerd[1596]: time="2025-07-11T00:18:12.118935132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" id:\"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" pid:3196 exited_at:{seconds:1752193092 nanos:118289119}" Jul 11 00:18:12.147312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def-rootfs.mount: Deactivated successfully. Jul 11 00:18:12.563021 kubelet[2773]: E0711 00:18:12.562955 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:13.564831 kubelet[2773]: E0711 00:18:13.564774 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:14.571689 kubelet[2773]: E0711 00:18:14.571629 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:14.573826 containerd[1596]: time="2025-07-11T00:18:14.573693633Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:18:14.594728 containerd[1596]: time="2025-07-11T00:18:14.594645975Z" level=info msg="Container 
603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:18:14.608817 containerd[1596]: time="2025-07-11T00:18:14.608730223Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\"" Jul 11 00:18:14.611362 containerd[1596]: time="2025-07-11T00:18:14.609987196Z" level=info msg="StartContainer for \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\"" Jul 11 00:18:14.611362 containerd[1596]: time="2025-07-11T00:18:14.611050982Z" level=info msg="connecting to shim 603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" protocol=ttrpc version=3 Jul 11 00:18:14.643997 systemd[1]: Started cri-containerd-603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9.scope - libcontainer container 603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9. Jul 11 00:18:14.760775 containerd[1596]: time="2025-07-11T00:18:14.760688246Z" level=info msg="StartContainer for \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" returns successfully" Jul 11 00:18:14.926337 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:18:14.926644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:18:14.927571 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:18:14.929493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 11 00:18:14.931046 containerd[1596]: time="2025-07-11T00:18:14.931002614Z" level=info msg="received exit event container_id:\"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" id:\"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" pid:3240 exited_at:{seconds:1752193094 nanos:930773857}" Jul 11 00:18:14.931329 containerd[1596]: time="2025-07-11T00:18:14.931279194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" id:\"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" pid:3240 exited_at:{seconds:1752193094 nanos:930773857}" Jul 11 00:18:14.931947 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:18:14.932542 systemd[1]: cri-containerd-603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9.scope: Deactivated successfully. Jul 11 00:18:14.952931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9-rootfs.mount: Deactivated successfully. Jul 11 00:18:15.033800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 00:18:15.575891 kubelet[2773]: E0711 00:18:15.575847 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:15.580608 containerd[1596]: time="2025-07-11T00:18:15.580542673Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:18:15.626889 containerd[1596]: time="2025-07-11T00:18:15.626823348Z" level=info msg="Container 6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:18:15.640380 containerd[1596]: time="2025-07-11T00:18:15.640314652Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\"" Jul 11 00:18:15.641095 containerd[1596]: time="2025-07-11T00:18:15.641045627Z" level=info msg="StartContainer for \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\"" Jul 11 00:18:15.643170 containerd[1596]: time="2025-07-11T00:18:15.643127006Z" level=info msg="connecting to shim 6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" protocol=ttrpc version=3 Jul 11 00:18:15.668896 systemd[1]: Started cri-containerd-6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7.scope - libcontainer container 6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7. Jul 11 00:18:15.717828 systemd[1]: cri-containerd-6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7.scope: Deactivated successfully. 
Jul 11 00:18:15.744865 containerd[1596]: time="2025-07-11T00:18:15.725848099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" id:\"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" pid:3287 exited_at:{seconds:1752193095 nanos:719007591}" Jul 11 00:18:15.891544 containerd[1596]: time="2025-07-11T00:18:15.891077640Z" level=info msg="received exit event container_id:\"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" id:\"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" pid:3287 exited_at:{seconds:1752193095 nanos:719007591}" Jul 11 00:18:15.901325 containerd[1596]: time="2025-07-11T00:18:15.901290242Z" level=info msg="StartContainer for \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" returns successfully" Jul 11 00:18:15.915514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7-rootfs.mount: Deactivated successfully. Jul 11 00:18:16.580408 kubelet[2773]: E0711 00:18:16.580366 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:16.582511 containerd[1596]: time="2025-07-11T00:18:16.582461275Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:18:17.128580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558430771.mount: Deactivated successfully. 
Jul 11 00:18:17.129069 containerd[1596]: time="2025-07-11T00:18:17.129030005Z" level=info msg="Container c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:18:17.154840 containerd[1596]: time="2025-07-11T00:18:17.154754183Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:18:17.155947 containerd[1596]: time="2025-07-11T00:18:17.155882840Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 11 00:18:17.157111 containerd[1596]: time="2025-07-11T00:18:17.157039701Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\""
Jul 11 00:18:17.157741 containerd[1596]: time="2025-07-11T00:18:17.157676831Z" level=info msg="StartContainer for \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\""
Jul 11 00:18:17.158882 containerd[1596]: time="2025-07-11T00:18:17.158836177Z" level=info msg="connecting to shim c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" protocol=ttrpc version=3
Jul 11 00:18:17.159135 containerd[1596]: time="2025-07-11T00:18:17.159103658Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:18:17.161674 containerd[1596]: time="2025-07-11T00:18:17.161055506Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.245248007s"
Jul 11 00:18:17.161674 containerd[1596]: time="2025-07-11T00:18:17.161145390Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 11 00:18:17.166998 containerd[1596]: time="2025-07-11T00:18:17.166939547Z" level=info msg="CreateContainer within sandbox \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 00:18:17.184899 systemd[1]: Started cri-containerd-c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569.scope - libcontainer container c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569.
Jul 11 00:18:17.221132 systemd[1]: cri-containerd-c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569.scope: Deactivated successfully.
Jul 11 00:18:17.221854 containerd[1596]: time="2025-07-11T00:18:17.221807099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" id:\"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" pid:3341 exited_at:{seconds:1752193097 nanos:221528386}" Jul 11 00:18:17.309915 containerd[1596]: time="2025-07-11T00:18:17.309860660Z" level=info msg="received exit event container_id:\"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" id:\"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" pid:3341 exited_at:{seconds:1752193097 nanos:221528386}" Jul 11 00:18:17.319478 containerd[1596]: time="2025-07-11T00:18:17.319422818Z" level=info msg="StartContainer for \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" returns successfully" Jul 11 00:18:17.329632 containerd[1596]: time="2025-07-11T00:18:17.329579573Z" level=info msg="Container 1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:18:17.586443 kubelet[2773]: E0711 00:18:17.586399 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:17.596071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569-rootfs.mount: Deactivated successfully. 
Jul 11 00:18:17.819171 containerd[1596]: time="2025-07-11T00:18:17.819094608Z" level=info msg="CreateContainer within sandbox \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\"" Jul 11 00:18:17.819879 containerd[1596]: time="2025-07-11T00:18:17.819819388Z" level=info msg="StartContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\"" Jul 11 00:18:17.820964 containerd[1596]: time="2025-07-11T00:18:17.820936221Z" level=info msg="connecting to shim 1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895" address="unix:///run/containerd/s/d7c984f14c526896d478cc13c066cbbec279b79157d784b5a9463ee38142d085" protocol=ttrpc version=3 Jul 11 00:18:17.843062 systemd[1]: Started cri-containerd-1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895.scope - libcontainer container 1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895. 
Jul 11 00:18:17.891882 containerd[1596]: time="2025-07-11T00:18:17.891828829Z" level=info msg="StartContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" returns successfully" Jul 11 00:18:18.593254 kubelet[2773]: E0711 00:18:18.593206 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:18.595866 containerd[1596]: time="2025-07-11T00:18:18.595811102Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:18:18.597297 kubelet[2773]: E0711 00:18:18.597273 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:19.179761 containerd[1596]: time="2025-07-11T00:18:19.179683940Z" level=info msg="Container 4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:18:19.185006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294090642.mount: Deactivated successfully. 
Jul 11 00:18:19.353679 containerd[1596]: time="2025-07-11T00:18:19.353611384Z" level=info msg="CreateContainer within sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\"" Jul 11 00:18:19.354813 containerd[1596]: time="2025-07-11T00:18:19.354287989Z" level=info msg="StartContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\"" Jul 11 00:18:19.355817 containerd[1596]: time="2025-07-11T00:18:19.355772383Z" level=info msg="connecting to shim 4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236" address="unix:///run/containerd/s/b5ad77a5efd88297686ab91bafb9ed525e6a2fe6966f9adb1cd14895b311ce7f" protocol=ttrpc version=3 Jul 11 00:18:19.403006 systemd[1]: Started cri-containerd-4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236.scope - libcontainer container 4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236. 
Jul 11 00:18:19.499134 containerd[1596]: time="2025-07-11T00:18:19.498934796Z" level=info msg="StartContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" returns successfully" Jul 11 00:18:19.596741 containerd[1596]: time="2025-07-11T00:18:19.596664566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" id:\"d1f594608a60289b2afc44e1357da49e477da71b223c4f8cd25114470818e01f\" pid:3443 exited_at:{seconds:1752193099 nanos:595987201}" Jul 11 00:18:19.608667 kubelet[2773]: I0711 00:18:19.608581 2773 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:18:19.611342 kubelet[2773]: E0711 00:18:19.611184 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:19.657277 kubelet[2773]: I0711 00:18:19.657184 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v64ww" podStartSLOduration=4.322077023 podStartE2EDuration="26.657155851s" podCreationTimestamp="2025-07-11 00:17:53 +0000 UTC" firstStartedPulling="2025-07-11 00:17:54.827151704 +0000 UTC m=+10.461751754" lastFinishedPulling="2025-07-11 00:18:17.162230532 +0000 UTC m=+32.796830582" observedRunningTime="2025-07-11 00:18:19.410658279 +0000 UTC m=+35.045258329" watchObservedRunningTime="2025-07-11 00:18:19.657155851 +0000 UTC m=+35.291755901" Jul 11 00:18:19.678874 systemd[1]: Created slice kubepods-burstable-pod308cd537_69f3_4181_b21d_9ff795367d29.slice - libcontainer container kubepods-burstable-pod308cd537_69f3_4181_b21d_9ff795367d29.slice. Jul 11 00:18:19.690415 systemd[1]: Created slice kubepods-burstable-pod2b95a2b9_f075_4dac_a0ae_da03c727bec1.slice - libcontainer container kubepods-burstable-pod2b95a2b9_f075_4dac_a0ae_da03c727bec1.slice. 
Jul 11 00:18:19.791267 kubelet[2773]: I0711 00:18:19.790733 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b95a2b9-f075-4dac-a0ae-da03c727bec1-config-volume\") pod \"coredns-7c65d6cfc9-nm6l7\" (UID: \"2b95a2b9-f075-4dac-a0ae-da03c727bec1\") " pod="kube-system/coredns-7c65d6cfc9-nm6l7"
Jul 11 00:18:19.791267 kubelet[2773]: I0711 00:18:19.790802 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2phx\" (UniqueName: \"kubernetes.io/projected/308cd537-69f3-4181-b21d-9ff795367d29-kube-api-access-m2phx\") pod \"coredns-7c65d6cfc9-ls79k\" (UID: \"308cd537-69f3-4181-b21d-9ff795367d29\") " pod="kube-system/coredns-7c65d6cfc9-ls79k"
Jul 11 00:18:19.791267 kubelet[2773]: I0711 00:18:19.790827 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/308cd537-69f3-4181-b21d-9ff795367d29-config-volume\") pod \"coredns-7c65d6cfc9-ls79k\" (UID: \"308cd537-69f3-4181-b21d-9ff795367d29\") " pod="kube-system/coredns-7c65d6cfc9-ls79k"
Jul 11 00:18:19.791267 kubelet[2773]: I0711 00:18:19.790866 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9ts\" (UniqueName: \"kubernetes.io/projected/2b95a2b9-f075-4dac-a0ae-da03c727bec1-kube-api-access-vh9ts\") pod \"coredns-7c65d6cfc9-nm6l7\" (UID: \"2b95a2b9-f075-4dac-a0ae-da03c727bec1\") " pod="kube-system/coredns-7c65d6cfc9-nm6l7"
Jul 11 00:18:19.986074 kubelet[2773]: E0711 00:18:19.985979 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:19.994404 kubelet[2773]: E0711 00:18:19.994362 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:19.997915 containerd[1596]: time="2025-07-11T00:18:19.997858638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nm6l7,Uid:2b95a2b9-f075-4dac-a0ae-da03c727bec1,Namespace:kube-system,Attempt:0,}"
Jul 11 00:18:19.998220 containerd[1596]: time="2025-07-11T00:18:19.998140996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ls79k,Uid:308cd537-69f3-4181-b21d-9ff795367d29,Namespace:kube-system,Attempt:0,}"
Jul 11 00:18:20.613413 kubelet[2773]: E0711 00:18:20.613371 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:21.615212 kubelet[2773]: E0711 00:18:21.615151 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:22.098412 systemd-networkd[1484]: cilium_host: Link UP
Jul 11 00:18:22.098793 systemd-networkd[1484]: cilium_net: Link UP
Jul 11 00:18:22.099008 systemd-networkd[1484]: cilium_net: Gained carrier
Jul 11 00:18:22.099179 systemd-networkd[1484]: cilium_host: Gained carrier
Jul 11 00:18:22.254597 systemd-networkd[1484]: cilium_vxlan: Link UP
Jul 11 00:18:22.254618 systemd-networkd[1484]: cilium_vxlan: Gained carrier
Jul 11 00:18:22.312985 systemd-networkd[1484]: cilium_host: Gained IPv6LL
Jul 11 00:18:22.520890 systemd-networkd[1484]: cilium_net: Gained IPv6LL
Jul 11 00:18:22.533731 kernel: NET: Registered PF_ALG protocol family
Jul 11 00:18:23.312977 systemd-networkd[1484]: cilium_vxlan: Gained IPv6LL
Jul 11 00:18:23.396903 systemd-networkd[1484]: lxc_health: Link UP
Jul 11 00:18:23.397305 systemd-networkd[1484]: lxc_health: Gained carrier
Jul 11 00:18:23.493024 kubelet[2773]: E0711 00:18:23.492964 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:23.606362 kubelet[2773]: I0711 00:18:23.606150 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x285l" podStartSLOduration=17.163941404 podStartE2EDuration="34.606125992s" podCreationTimestamp="2025-07-11 00:17:49 +0000 UTC" firstStartedPulling="2025-07-11 00:17:54.472464192 +0000 UTC m=+10.107064242" lastFinishedPulling="2025-07-11 00:18:11.91464878 +0000 UTC m=+27.549248830" observedRunningTime="2025-07-11 00:18:20.839797551 +0000 UTC m=+36.474397601" watchObservedRunningTime="2025-07-11 00:18:23.606125992 +0000 UTC m=+39.240726042"
Jul 11 00:18:23.619475 kubelet[2773]: E0711 00:18:23.619417 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:23.847781 systemd-networkd[1484]: lxc0830521f4f7b: Link UP
Jul 11 00:18:23.869751 kernel: eth0: renamed from tmpec340
Jul 11 00:18:23.869926 kernel: eth0: renamed from tmp75a8b
Jul 11 00:18:23.871224 systemd-networkd[1484]: lxc72ebbe6df03e: Link UP
Jul 11 00:18:23.872097 systemd-networkd[1484]: lxc0830521f4f7b: Gained carrier
Jul 11 00:18:23.877334 systemd-networkd[1484]: lxc72ebbe6df03e: Gained carrier
Jul 11 00:18:24.465216 systemd-networkd[1484]: lxc_health: Gained IPv6LL
Jul 11 00:18:24.621966 kubelet[2773]: E0711 00:18:24.621820 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:24.976969 systemd-networkd[1484]: lxc72ebbe6df03e: Gained IPv6LL
Jul 11 00:18:25.489869 systemd-networkd[1484]: lxc0830521f4f7b: Gained IPv6LL
Jul 11 00:18:28.405275 containerd[1596]: time="2025-07-11T00:18:28.405151655Z" level=info msg="connecting to shim ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06" address="unix:///run/containerd/s/16c2a813c47f4f92d0e6cf82c9b061f47b3c929ae328725eb8f35b9fe11468d6" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:18:28.407337 containerd[1596]: time="2025-07-11T00:18:28.407278208Z" level=info msg="connecting to shim 75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3" address="unix:///run/containerd/s/28101b6922ae57bd68408a6676e114b8849542142f45d5e449ea9e249d9b0196" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:18:28.442941 systemd[1]: Started cri-containerd-ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06.scope - libcontainer container ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06.
Jul 11 00:18:28.450263 systemd[1]: Started cri-containerd-75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3.scope - libcontainer container 75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3.
Jul 11 00:18:28.472215 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:28.479407 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:28.716658 containerd[1596]: time="2025-07-11T00:18:28.716430872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ls79k,Uid:308cd537-69f3-4181-b21d-9ff795367d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06\""
Jul 11 00:18:28.717939 kubelet[2773]: E0711 00:18:28.717890 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:28.725573 containerd[1596]: time="2025-07-11T00:18:28.725517834Z" level=info msg="CreateContainer within sandbox \"ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:18:28.875268 containerd[1596]: time="2025-07-11T00:18:28.875214516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nm6l7,Uid:2b95a2b9-f075-4dac-a0ae-da03c727bec1,Namespace:kube-system,Attempt:0,} returns sandbox id \"75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3\""
Jul 11 00:18:28.876435 kubelet[2773]: E0711 00:18:28.876375 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:28.878525 containerd[1596]: time="2025-07-11T00:18:28.878475688Z" level=info msg="CreateContainer within sandbox \"75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:18:29.287504 containerd[1596]: time="2025-07-11T00:18:29.287402887Z" level=info msg="Container 4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:18:29.312679 containerd[1596]: time="2025-07-11T00:18:29.312577103Z" level=info msg="Container 56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:18:29.339290 containerd[1596]: time="2025-07-11T00:18:29.339139405Z" level=info msg="CreateContainer within sandbox \"ec3409ed8a8da2f48ac5a2bdf04a9aa8d422c0a5d68aff18787ba542a5573c06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1\""
Jul 11 00:18:29.341276 containerd[1596]: time="2025-07-11T00:18:29.341245636Z" level=info msg="CreateContainer within sandbox \"75a8b2ba9b924c83f267d9979170b0beb0bf2f82a6c525be4198923e5ef948c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e\""
Jul 11 00:18:29.345748 containerd[1596]: time="2025-07-11T00:18:29.345719812Z" level=info msg="StartContainer for \"56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e\""
Jul 11 00:18:29.346051 containerd[1596]: time="2025-07-11T00:18:29.345719792Z" level=info msg="StartContainer for \"4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1\""
Jul 11 00:18:29.347721 containerd[1596]: time="2025-07-11T00:18:29.347270382Z" level=info msg="connecting to shim 4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1" address="unix:///run/containerd/s/16c2a813c47f4f92d0e6cf82c9b061f47b3c929ae328725eb8f35b9fe11468d6" protocol=ttrpc version=3
Jul 11 00:18:29.347721 containerd[1596]: time="2025-07-11T00:18:29.347596971Z" level=info msg="connecting to shim 56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e" address="unix:///run/containerd/s/28101b6922ae57bd68408a6676e114b8849542142f45d5e449ea9e249d9b0196" protocol=ttrpc version=3
Jul 11 00:18:29.359325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676666661.mount: Deactivated successfully.
Jul 11 00:18:29.385048 systemd[1]: Started cri-containerd-4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1.scope - libcontainer container 4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1.
Jul 11 00:18:29.387130 systemd[1]: Started cri-containerd-56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e.scope - libcontainer container 56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e.
Jul 11 00:18:29.434197 containerd[1596]: time="2025-07-11T00:18:29.434127081Z" level=info msg="StartContainer for \"4f7cd18afa8c34e3d557a413029933177fccc305f8335d0fa501dac91d81beb1\" returns successfully"
Jul 11 00:18:29.449609 containerd[1596]: time="2025-07-11T00:18:29.449472377Z" level=info msg="StartContainer for \"56041f4c2481c4316932a5d29445702dfc9162dcdc1025cd9c5cea390376af2e\" returns successfully"
Jul 11 00:18:29.660258 kubelet[2773]: E0711 00:18:29.660005 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:29.660258 kubelet[2773]: E0711 00:18:29.660104 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:29.684328 kubelet[2773]: I0711 00:18:29.684246 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nm6l7" podStartSLOduration=36.684200223 podStartE2EDuration="36.684200223s" podCreationTimestamp="2025-07-11 00:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:29.683004939 +0000 UTC m=+45.317604999" watchObservedRunningTime="2025-07-11 00:18:29.684200223 +0000 UTC m=+45.318800273"
Jul 11 00:18:29.713215 kubelet[2773]: I0711 00:18:29.713115 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ls79k" podStartSLOduration=36.713088569 podStartE2EDuration="36.713088569s" podCreationTimestamp="2025-07-11 00:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:29.711962769 +0000 UTC m=+45.346562819" watchObservedRunningTime="2025-07-11 00:18:29.713088569 +0000 UTC m=+45.347688619"
Jul 11 00:18:30.658581 kubelet[2773]: E0711 00:18:30.658519 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:30.659198 kubelet[2773]: E0711 00:18:30.658623 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:31.660778 kubelet[2773]: E0711 00:18:31.660710 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:31.660778 kubelet[2773]: E0711 00:18:31.660715 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:33.100641 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:48818.service - OpenSSH per-connection server daemon (10.0.0.1:48818).
Jul 11 00:18:33.190672 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 48818 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:18:33.193048 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:33.199568 systemd-logind[1582]: New session 10 of user core.
Jul 11 00:18:33.213014 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 00:18:33.515219 sshd[4094]: Connection closed by 10.0.0.1 port 48818
Jul 11 00:18:33.515552 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:33.521467 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:48818.service: Deactivated successfully.
Jul 11 00:18:33.524428 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 00:18:33.525457 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit.
Jul 11 00:18:33.527608 systemd-logind[1582]: Removed session 10. Jul 11 00:18:38.535384 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:48826.service - OpenSSH per-connection server daemon (10.0.0.1:48826). Jul 11 00:18:38.601595 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 48826 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:18:38.603527 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:38.609155 systemd-logind[1582]: New session 11 of user core. Jul 11 00:18:38.622854 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:18:38.813940 sshd[4117]: Connection closed by 10.0.0.1 port 48826 Jul 11 00:18:38.814322 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:38.819576 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:48826.service: Deactivated successfully. Jul 11 00:18:38.821960 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:18:38.822799 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:18:38.824239 systemd-logind[1582]: Removed session 11. Jul 11 00:18:43.833684 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560). Jul 11 00:18:43.933284 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:18:43.935449 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:43.941656 systemd-logind[1582]: New session 12 of user core. Jul 11 00:18:43.951914 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 11 00:18:44.102205 sshd[4133]: Connection closed by 10.0.0.1 port 42560 Jul 11 00:18:44.102585 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:44.106496 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:42560.service: Deactivated successfully. Jul 11 00:18:44.109213 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:18:44.112871 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:18:44.114282 systemd-logind[1582]: Removed session 12. Jul 11 00:18:49.122961 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Jul 11 00:18:49.185643 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:18:49.188273 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:49.196587 systemd-logind[1582]: New session 13 of user core. Jul 11 00:18:49.202935 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:18:49.397833 sshd[4151]: Connection closed by 10.0.0.1 port 42570 Jul 11 00:18:49.398076 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:49.402506 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:42570.service: Deactivated successfully. Jul 11 00:18:49.404688 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:18:49.405939 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:18:49.407409 systemd-logind[1582]: Removed session 13. Jul 11 00:18:54.412290 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:35130.service - OpenSSH per-connection server daemon (10.0.0.1:35130). 
Jul 11 00:18:54.474549 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 35130 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:18:54.477021 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:54.487227 systemd-logind[1582]: New session 14 of user core. Jul 11 00:18:54.497057 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:18:54.879579 sshd[4168]: Connection closed by 10.0.0.1 port 35130 Jul 11 00:18:54.880127 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:54.885788 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:35130.service: Deactivated successfully. Jul 11 00:18:54.888129 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:18:54.889160 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:18:54.890906 systemd-logind[1582]: Removed session 14. Jul 11 00:18:59.683439 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:38342.service - OpenSSH per-connection server daemon (10.0.0.1:38342). Jul 11 00:18:59.737318 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 38342 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:18:59.739016 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:59.744059 systemd-logind[1582]: New session 15 of user core. Jul 11 00:18:59.756925 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:18:59.877039 sshd[4187]: Connection closed by 10.0.0.1 port 38342 Jul 11 00:18:59.877403 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:59.881894 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:38342.service: Deactivated successfully. Jul 11 00:18:59.883973 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:18:59.885306 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. 
Jul 11 00:18:59.887546 systemd-logind[1582]: Removed session 15.
Jul 11 00:19:00.488079 kubelet[2773]: E0711 00:19:00.488023 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:04.891834 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344).
Jul 11 00:19:04.953474 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:04.955472 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:04.960981 systemd-logind[1582]: New session 16 of user core.
Jul 11 00:19:04.965870 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 00:19:05.100678 sshd[4204]: Connection closed by 10.0.0.1 port 38344
Jul 11 00:19:05.101204 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:05.107349 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:38344.service: Deactivated successfully.
Jul 11 00:19:05.109969 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 00:19:05.111434 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit.
Jul 11 00:19:05.112916 systemd-logind[1582]: Removed session 16.
Jul 11 00:19:07.488936 kubelet[2773]: E0711 00:19:07.488872 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:08.923784 update_engine[1585]: I20250711 00:19:08.923637 1585 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 11 00:19:08.923784 update_engine[1585]: I20250711 00:19:08.923765 1585 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 11 00:19:08.924359 update_engine[1585]: I20250711 00:19:08.924099 1585 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 11 00:19:08.924899 update_engine[1585]: I20250711 00:19:08.924849 1585 omaha_request_params.cc:62] Current group set to beta
Jul 11 00:19:08.926934 update_engine[1585]: I20250711 00:19:08.926887 1585 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 11 00:19:08.926934 update_engine[1585]: I20250711 00:19:08.926913 1585 update_attempter.cc:643] Scheduling an action processor start.
Jul 11 00:19:08.927049 update_engine[1585]: I20250711 00:19:08.926941 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 11 00:19:08.927049 update_engine[1585]: I20250711 00:19:08.926988 1585 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 11 00:19:08.927118 update_engine[1585]: I20250711 00:19:08.927089 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 11 00:19:08.927118 update_engine[1585]: I20250711 00:19:08.927109 1585 omaha_request_action.cc:272] Request:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927118 update_engine[1585]:
Jul 11 00:19:08.927321 update_engine[1585]: I20250711 00:19:08.927119 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 00:19:08.931299 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 11 00:19:08.931848 update_engine[1585]: I20250711 00:19:08.931660 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 00:19:08.932188 update_engine[1585]: I20250711 00:19:08.932129 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 00:19:08.939930 update_engine[1585]: E20250711 00:19:08.939481 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 00:19:08.939930 update_engine[1585]: I20250711 00:19:08.939560 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 11 00:19:10.127158 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:50116.service - OpenSSH per-connection server daemon (10.0.0.1:50116).
Jul 11 00:19:10.187992 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 50116 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:10.191146 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:10.198987 systemd-logind[1582]: New session 17 of user core.
Jul 11 00:19:10.217079 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 00:19:10.383384 sshd[4221]: Connection closed by 10.0.0.1 port 50116
Jul 11 00:19:10.384300 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:10.395471 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:50116.service: Deactivated successfully.
Jul 11 00:19:10.398272 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 00:19:10.399718 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit.
Jul 11 00:19:10.405331 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:50128.service - OpenSSH per-connection server daemon (10.0.0.1:50128).
Jul 11 00:19:10.406319 systemd-logind[1582]: Removed session 17.
Jul 11 00:19:10.465739 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:10.467681 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:10.473231 systemd-logind[1582]: New session 18 of user core.
Jul 11 00:19:10.484074 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:19:10.489110 kubelet[2773]: E0711 00:19:10.489002 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:10.662047 sshd[4237]: Connection closed by 10.0.0.1 port 50128
Jul 11 00:19:10.662829 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:10.677515 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:50128.service: Deactivated successfully.
Jul 11 00:19:10.680496 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:19:10.681493 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:19:10.685943 systemd-logind[1582]: Removed session 18.
Jul 11 00:19:10.691805 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:50142.service - OpenSSH per-connection server daemon (10.0.0.1:50142).
Jul 11 00:19:10.756003 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 50142 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:10.757877 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:10.763922 systemd-logind[1582]: New session 19 of user core.
Jul 11 00:19:10.772952 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 00:19:10.900399 sshd[4251]: Connection closed by 10.0.0.1 port 50142
Jul 11 00:19:10.900772 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:10.905953 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:50142.service: Deactivated successfully.
Jul 11 00:19:10.908962 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:19:10.910430 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:19:10.912456 systemd-logind[1582]: Removed session 19.
Jul 11 00:19:15.918208 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148).
Jul 11 00:19:15.980582 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:15.982485 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:15.987728 systemd-logind[1582]: New session 20 of user core.
Jul 11 00:19:16.002000 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:19:16.144745 sshd[4267]: Connection closed by 10.0.0.1 port 50148
Jul 11 00:19:16.145095 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:16.149468 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:50148.service: Deactivated successfully.
Jul 11 00:19:16.151791 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:19:16.152727 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:19:16.154408 systemd-logind[1582]: Removed session 20.
Jul 11 00:19:18.922681 update_engine[1585]: I20250711 00:19:18.922544 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 00:19:18.923249 update_engine[1585]: I20250711 00:19:18.922914 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 00:19:18.923284 update_engine[1585]: I20250711 00:19:18.923249 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 00:19:18.933256 update_engine[1585]: E20250711 00:19:18.933146 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 00:19:18.933426 update_engine[1585]: I20250711 00:19:18.933269 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 11 00:19:20.489287 kubelet[2773]: E0711 00:19:20.489228 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:21.162867 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:40852.service - OpenSSH per-connection server daemon (10.0.0.1:40852).
Jul 11 00:19:21.227083 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 40852 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:21.229149 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:21.234736 systemd-logind[1582]: New session 21 of user core.
Jul 11 00:19:21.246110 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 00:19:21.401750 sshd[4282]: Connection closed by 10.0.0.1 port 40852
Jul 11 00:19:21.402360 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:21.409266 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:40852.service: Deactivated successfully.
Jul 11 00:19:21.412144 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:19:21.413472 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:19:21.415251 systemd-logind[1582]: Removed session 21.
Jul 11 00:19:21.488485 kubelet[2773]: E0711 00:19:21.488262 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:26.423667 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:40868.service - OpenSSH per-connection server daemon (10.0.0.1:40868).
Jul 11 00:19:26.480943 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 40868 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:26.482888 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:26.488416 systemd-logind[1582]: New session 22 of user core.
Jul 11 00:19:26.499852 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 00:19:26.627089 sshd[4300]: Connection closed by 10.0.0.1 port 40868
Jul 11 00:19:26.627687 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:26.642343 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:40868.service: Deactivated successfully.
Jul 11 00:19:26.644582 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 00:19:26.645880 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit.
Jul 11 00:19:26.650079 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:40878.service - OpenSSH per-connection server daemon (10.0.0.1:40878).
Jul 11 00:19:26.651168 systemd-logind[1582]: Removed session 22.
Jul 11 00:19:26.751819 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 40878 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:26.754765 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:26.765039 systemd-logind[1582]: New session 23 of user core.
Jul 11 00:19:26.783010 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 00:19:27.270511 sshd[4315]: Connection closed by 10.0.0.1 port 40878
Jul 11 00:19:27.271119 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:27.290412 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:40878.service: Deactivated successfully.
Jul 11 00:19:27.293166 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 00:19:27.294225 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit.
Jul 11 00:19:27.299426 systemd[1]: Started sshd@23-10.0.0.23:22-10.0.0.1:40886.service - OpenSSH per-connection server daemon (10.0.0.1:40886).
Jul 11 00:19:27.300378 systemd-logind[1582]: Removed session 23.
Jul 11 00:19:27.373317 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 40886 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:27.375535 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:27.383255 systemd-logind[1582]: New session 24 of user core.
Jul 11 00:19:27.394083 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 00:19:28.920879 update_engine[1585]: I20250711 00:19:28.920784 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 00:19:28.921406 update_engine[1585]: I20250711 00:19:28.921108 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 00:19:28.921447 update_engine[1585]: I20250711 00:19:28.921427 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 00:19:28.929899 update_engine[1585]: E20250711 00:19:28.929843 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 00:19:28.929979 update_engine[1585]: I20250711 00:19:28.929908 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 11 00:19:29.204480 sshd[4328]: Connection closed by 10.0.0.1 port 40886
Jul 11 00:19:29.205950 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:29.216299 systemd[1]: sshd@23-10.0.0.23:22-10.0.0.1:40886.service: Deactivated successfully.
Jul 11 00:19:29.218599 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 00:19:29.220763 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit.
Jul 11 00:19:29.224287 systemd[1]: Started sshd@24-10.0.0.23:22-10.0.0.1:40888.service - OpenSSH per-connection server daemon (10.0.0.1:40888).
Jul 11 00:19:29.225317 systemd-logind[1582]: Removed session 24.
Jul 11 00:19:29.314609 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 40888 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:29.317128 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:29.324476 systemd-logind[1582]: New session 25 of user core.
Jul 11 00:19:29.343097 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 11 00:19:29.621541 sshd[4349]: Connection closed by 10.0.0.1 port 40888
Jul 11 00:19:29.622035 sshd-session[4347]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:29.643459 systemd[1]: sshd@24-10.0.0.23:22-10.0.0.1:40888.service: Deactivated successfully.
Jul 11 00:19:29.646436 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 00:19:29.647546 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit.
Jul 11 00:19:29.651049 systemd-logind[1582]: Removed session 25.
Jul 11 00:19:29.653539 systemd[1]: Started sshd@25-10.0.0.23:22-10.0.0.1:33936.service - OpenSSH per-connection server daemon (10.0.0.1:33936).
Jul 11 00:19:29.716463 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 33936 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:29.718268 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:29.724996 systemd-logind[1582]: New session 26 of user core.
Jul 11 00:19:29.741034 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 11 00:19:29.864165 sshd[4363]: Connection closed by 10.0.0.1 port 33936
Jul 11 00:19:29.864526 sshd-session[4361]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:29.868551 systemd[1]: sshd@25-10.0.0.23:22-10.0.0.1:33936.service: Deactivated successfully.
Jul 11 00:19:29.871365 systemd[1]: session-26.scope: Deactivated successfully.
Jul 11 00:19:29.873818 systemd-logind[1582]: Session 26 logged out. Waiting for processes to exit.
Jul 11 00:19:29.876210 systemd-logind[1582]: Removed session 26.
Jul 11 00:19:34.884075 systemd[1]: Started sshd@26-10.0.0.23:22-10.0.0.1:33946.service - OpenSSH per-connection server daemon (10.0.0.1:33946).
Jul 11 00:19:34.951897 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 33946 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:34.954003 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:34.960433 systemd-logind[1582]: New session 27 of user core.
Jul 11 00:19:34.974937 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 11 00:19:35.103051 sshd[4378]: Connection closed by 10.0.0.1 port 33946
Jul 11 00:19:35.103446 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:35.108851 systemd[1]: sshd@26-10.0.0.23:22-10.0.0.1:33946.service: Deactivated successfully.
Jul 11 00:19:35.111622 systemd[1]: session-27.scope: Deactivated successfully.
Jul 11 00:19:35.112577 systemd-logind[1582]: Session 27 logged out. Waiting for processes to exit.
Jul 11 00:19:35.114595 systemd-logind[1582]: Removed session 27.
Jul 11 00:19:38.923733 update_engine[1585]: I20250711 00:19:38.923590 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 00:19:38.924265 update_engine[1585]: I20250711 00:19:38.923974 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 00:19:38.924290 update_engine[1585]: I20250711 00:19:38.924270 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 00:19:38.930282 update_engine[1585]: E20250711 00:19:38.930235 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 00:19:38.930356 update_engine[1585]: I20250711 00:19:38.930294 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 11 00:19:38.930356 update_engine[1585]: I20250711 00:19:38.930306 1585 omaha_request_action.cc:617] Omaha request response:
Jul 11 00:19:38.930444 update_engine[1585]: E20250711 00:19:38.930409 1585 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 11 00:19:38.930493 update_engine[1585]: I20250711 00:19:38.930462 1585 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 11 00:19:38.930493 update_engine[1585]: I20250711 00:19:38.930472 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 00:19:38.930493 update_engine[1585]: I20250711 00:19:38.930483 1585 update_attempter.cc:306] Processing Done.
Jul 11 00:19:38.930588 update_engine[1585]: E20250711 00:19:38.930505 1585 update_attempter.cc:619] Update failed.
Jul 11 00:19:38.930588 update_engine[1585]: I20250711 00:19:38.930521 1585 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 11 00:19:38.930588 update_engine[1585]: I20250711 00:19:38.930528 1585 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 11 00:19:38.930588 update_engine[1585]: I20250711 00:19:38.930536 1585 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 11 00:19:38.930725 update_engine[1585]: I20250711 00:19:38.930624 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 11 00:19:38.930725 update_engine[1585]: I20250711 00:19:38.930658 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 11 00:19:38.930725 update_engine[1585]: I20250711 00:19:38.930667 1585 omaha_request_action.cc:272] Request:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]:
Jul 11 00:19:38.930725 update_engine[1585]: I20250711 00:19:38.930676 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 00:19:38.931004 update_engine[1585]: I20250711 00:19:38.930886 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 00:19:38.931038 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 11 00:19:38.931361 update_engine[1585]: I20250711 00:19:38.931088 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 00:19:38.936516 update_engine[1585]: E20250711 00:19:38.936475 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936520 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936528 1585 omaha_request_action.cc:617] Omaha request response:
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936534 1585 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936540 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936546 1585 update_attempter.cc:306] Processing Done.
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936553 1585 update_attempter.cc:310] Error event sent.
Jul 11 00:19:38.936582 update_engine[1585]: I20250711 00:19:38.936568 1585 update_check_scheduler.cc:74] Next update check in 41m34s
Jul 11 00:19:38.936953 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 11 00:19:40.120945 systemd[1]: Started sshd@27-10.0.0.23:22-10.0.0.1:40434.service - OpenSSH per-connection server daemon (10.0.0.1:40434).
Jul 11 00:19:40.175465 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 40434 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:40.177369 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:40.182850 systemd-logind[1582]: New session 28 of user core.
Jul 11 00:19:40.191947 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 11 00:19:40.314362 sshd[4396]: Connection closed by 10.0.0.1 port 40434
Jul 11 00:19:40.314852 sshd-session[4394]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:40.321089 systemd[1]: sshd@27-10.0.0.23:22-10.0.0.1:40434.service: Deactivated successfully.
Jul 11 00:19:40.323974 systemd[1]: session-28.scope: Deactivated successfully.
Jul 11 00:19:40.325059 systemd-logind[1582]: Session 28 logged out. Waiting for processes to exit.
Jul 11 00:19:40.326996 systemd-logind[1582]: Removed session 28.
Jul 11 00:19:43.488471 kubelet[2773]: E0711 00:19:43.488398 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:45.332923 systemd[1]: Started sshd@28-10.0.0.23:22-10.0.0.1:40436.service - OpenSSH per-connection server daemon (10.0.0.1:40436).
Jul 11 00:19:45.393473 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 40436 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:45.395351 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:45.401313 systemd-logind[1582]: New session 29 of user core.
Jul 11 00:19:45.410867 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 11 00:19:45.616423 sshd[4414]: Connection closed by 10.0.0.1 port 40436
Jul 11 00:19:45.616623 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:45.622414 systemd[1]: sshd@28-10.0.0.23:22-10.0.0.1:40436.service: Deactivated successfully.
Jul 11 00:19:45.624993 systemd[1]: session-29.scope: Deactivated successfully.
Jul 11 00:19:45.625895 systemd-logind[1582]: Session 29 logged out. Waiting for processes to exit.
Jul 11 00:19:45.627831 systemd-logind[1582]: Removed session 29.
Jul 11 00:19:50.491120 kubelet[2773]: E0711 00:19:50.490782 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:50.644964 systemd[1]: Started sshd@29-10.0.0.23:22-10.0.0.1:52254.service - OpenSSH per-connection server daemon (10.0.0.1:52254).
Jul 11 00:19:50.711388 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 52254 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:50.713689 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:50.720023 systemd-logind[1582]: New session 30 of user core.
Jul 11 00:19:50.730083 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 11 00:19:50.860536 sshd[4429]: Connection closed by 10.0.0.1 port 52254
Jul 11 00:19:50.861306 sshd-session[4427]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:50.871979 systemd[1]: sshd@29-10.0.0.23:22-10.0.0.1:52254.service: Deactivated successfully.
Jul 11 00:19:50.874582 systemd[1]: session-30.scope: Deactivated successfully.
Jul 11 00:19:50.875692 systemd-logind[1582]: Session 30 logged out. Waiting for processes to exit.
Jul 11 00:19:50.878858 systemd[1]: Started sshd@30-10.0.0.23:22-10.0.0.1:52262.service - OpenSSH per-connection server daemon (10.0.0.1:52262).
Jul 11 00:19:50.879634 systemd-logind[1582]: Removed session 30.
Jul 11 00:19:50.937407 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 52262 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:19:50.938926 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:50.945466 systemd-logind[1582]: New session 31 of user core.
Jul 11 00:19:50.954880 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 11 00:19:52.376779 containerd[1596]: time="2025-07-11T00:19:52.376525453Z" level=info msg="StopContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" with timeout 30 (s)"
Jul 11 00:19:52.392106 containerd[1596]: time="2025-07-11T00:19:52.392041299Z" level=info msg="Stop container \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" with signal terminated"
Jul 11 00:19:52.411524 systemd[1]: cri-containerd-1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895.scope: Deactivated successfully.
Jul 11 00:19:52.417258 containerd[1596]: time="2025-07-11T00:19:52.417198746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" id:\"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" pid:3379 exited_at:{seconds:1752193192 nanos:415104173}"
Jul 11 00:19:52.417381 containerd[1596]: time="2025-07-11T00:19:52.417289828Z" level=info msg="received exit event container_id:\"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" id:\"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" pid:3379 exited_at:{seconds:1752193192 nanos:415104173}"
Jul 11 00:19:52.436408 containerd[1596]: time="2025-07-11T00:19:52.436358095Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:19:52.446985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895-rootfs.mount: Deactivated successfully.
Jul 11 00:19:52.447664 containerd[1596]: time="2025-07-11T00:19:52.447272886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" id:\"4153eefb8d10c15d92d85a447b0ad00ad278812aa38b790f2b826ecbd0bf65e3\" pid:4472 exited_at:{seconds:1752193192 nanos:446571589}"
Jul 11 00:19:52.451291 containerd[1596]: time="2025-07-11T00:19:52.451064599Z" level=info msg="StopContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" with timeout 2 (s)"
Jul 11 00:19:52.451528 containerd[1596]: time="2025-07-11T00:19:52.451490275Z" level=info msg="Stop container \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" with signal terminated"
Jul 11 00:19:52.461111 systemd-networkd[1484]: lxc_health: Link DOWN
Jul 11 00:19:52.461129 systemd-networkd[1484]: lxc_health: Lost carrier
Jul 11 00:19:52.465353 containerd[1596]: time="2025-07-11T00:19:52.465296836Z" level=info msg="StopContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" returns successfully"
Jul 11 00:19:52.466581 containerd[1596]: time="2025-07-11T00:19:52.466549626Z" level=info msg="StopPodSandbox for \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\""
Jul 11 00:19:52.466663 containerd[1596]: time="2025-07-11T00:19:52.466636811Z" level=info msg="Container to stop \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:52.482400 systemd[1]: cri-containerd-b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2.scope: Deactivated successfully.
Jul 11 00:19:52.484900 containerd[1596]: time="2025-07-11T00:19:52.484855891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" id:\"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" pid:2976 exit_status:137 exited_at:{seconds:1752193192 nanos:484506280}"
Jul 11 00:19:52.491223 systemd[1]: cri-containerd-4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236.scope: Deactivated successfully.
Jul 11 00:19:52.492142 systemd[1]: cri-containerd-4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236.scope: Consumed 8.343s CPU time, 127.3M memory peak, 684K read from disk, 13.3M written to disk.
Jul 11 00:19:52.494713 containerd[1596]: time="2025-07-11T00:19:52.494615617Z" level=info msg="received exit event container_id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" pid:3413 exited_at:{seconds:1752193192 nanos:493540603}"
Jul 11 00:19:52.525004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236-rootfs.mount: Deactivated successfully.
Jul 11 00:19:52.528987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2-rootfs.mount: Deactivated successfully.
Jul 11 00:19:52.623445 containerd[1596]: time="2025-07-11T00:19:52.623387570Z" level=info msg="TearDown network for sandbox \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" successfully" Jul 11 00:19:52.623445 containerd[1596]: time="2025-07-11T00:19:52.623433947Z" level=info msg="StopPodSandbox for \"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" returns successfully" Jul 11 00:19:52.623841 containerd[1596]: time="2025-07-11T00:19:52.623806763Z" level=info msg="shim disconnected" id=b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2 namespace=k8s.io Jul 11 00:19:52.623841 containerd[1596]: time="2025-07-11T00:19:52.623836870Z" level=warning msg="cleaning up after shim disconnected" id=b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2 namespace=k8s.io Jul 11 00:19:52.624643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2-shm.mount: Deactivated successfully. 
Jul 11 00:19:52.663532 containerd[1596]: time="2025-07-11T00:19:52.623847219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:19:52.663683 containerd[1596]: time="2025-07-11T00:19:52.632504979Z" level=info msg="received exit event sandbox_id:\"b056e1e4d173f7ba42ddea423da83dddc8a202987d298d4771b7015cd6886cf2\" exit_status:137 exited_at:{seconds:1752193192 nanos:484506280}" Jul 11 00:19:52.698546 containerd[1596]: time="2025-07-11T00:19:52.698080857Z" level=info msg="StopContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" returns successfully" Jul 11 00:19:52.702178 containerd[1596]: time="2025-07-11T00:19:52.702142082Z" level=info msg="StopPodSandbox for \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\"" Jul 11 00:19:52.702326 containerd[1596]: time="2025-07-11T00:19:52.702258352Z" level=info msg="Container to stop \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:19:52.702326 containerd[1596]: time="2025-07-11T00:19:52.702303787Z" level=info msg="Container to stop \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:19:52.702326 containerd[1596]: time="2025-07-11T00:19:52.702319728Z" level=info msg="Container to stop \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:19:52.702432 containerd[1596]: time="2025-07-11T00:19:52.702332051Z" level=info msg="Container to stop \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:19:52.702432 containerd[1596]: time="2025-07-11T00:19:52.702343633Z" level=info msg="Container to stop \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:19:52.707779 containerd[1596]: time="2025-07-11T00:19:52.707446948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" id:\"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" pid:3413 exited_at:{seconds:1752193192 nanos:493540603}" Jul 11 00:19:52.715598 systemd[1]: cri-containerd-a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f.scope: Deactivated successfully. Jul 11 00:19:52.717009 containerd[1596]: time="2025-07-11T00:19:52.716967731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" id:\"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" pid:2892 exit_status:137 exited_at:{seconds:1752193192 nanos:716480459}" Jul 11 00:19:52.750315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f-rootfs.mount: Deactivated successfully. 
Jul 11 00:19:52.826773 kubelet[2773]: I0711 00:19:52.826689 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gz96\" (UniqueName: \"kubernetes.io/projected/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-kube-api-access-8gz96\") pod \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\" (UID: \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\") " Jul 11 00:19:52.826773 kubelet[2773]: I0711 00:19:52.826779 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-cilium-config-path\") pod \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\" (UID: \"5e30c967-5f21-467d-aa3c-66bac9e1b9d8\") " Jul 11 00:19:52.830115 kubelet[2773]: I0711 00:19:52.830071 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e30c967-5f21-467d-aa3c-66bac9e1b9d8" (UID: "5e30c967-5f21-467d-aa3c-66bac9e1b9d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:19:52.848109 kubelet[2773]: I0711 00:19:52.848013 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-kube-api-access-8gz96" (OuterVolumeSpecName: "kube-api-access-8gz96") pod "5e30c967-5f21-467d-aa3c-66bac9e1b9d8" (UID: "5e30c967-5f21-467d-aa3c-66bac9e1b9d8"). InnerVolumeSpecName "kube-api-access-8gz96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:19:52.858831 containerd[1596]: time="2025-07-11T00:19:52.858768965Z" level=info msg="shim disconnected" id=a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f namespace=k8s.io Jul 11 00:19:52.858831 containerd[1596]: time="2025-07-11T00:19:52.858813439Z" level=warning msg="cleaning up after shim disconnected" id=a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f namespace=k8s.io Jul 11 00:19:52.858831 containerd[1596]: time="2025-07-11T00:19:52.858824470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:19:52.874475 containerd[1596]: time="2025-07-11T00:19:52.874358520Z" level=info msg="received exit event sandbox_id:\"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" exit_status:137 exited_at:{seconds:1752193192 nanos:716480459}" Jul 11 00:19:52.874763 containerd[1596]: time="2025-07-11T00:19:52.874662626Z" level=info msg="TearDown network for sandbox \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" successfully" Jul 11 00:19:52.874763 containerd[1596]: time="2025-07-11T00:19:52.874728961Z" level=info msg="StopPodSandbox for \"a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f\" returns successfully" Jul 11 00:19:52.882402 kubelet[2773]: I0711 00:19:52.882337 2773 scope.go:117] "RemoveContainer" containerID="1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895" Jul 11 00:19:52.887457 containerd[1596]: time="2025-07-11T00:19:52.887277342Z" level=info msg="RemoveContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\"" Jul 11 00:19:52.891717 systemd[1]: Removed slice kubepods-besteffort-pod5e30c967_5f21_467d_aa3c_66bac9e1b9d8.slice - libcontainer container kubepods-besteffort-pod5e30c967_5f21_467d_aa3c_66bac9e1b9d8.slice. 
Jul 11 00:19:52.893689 containerd[1596]: time="2025-07-11T00:19:52.893637997Z" level=info msg="RemoveContainer for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" returns successfully" Jul 11 00:19:52.893947 kubelet[2773]: I0711 00:19:52.893904 2773 scope.go:117] "RemoveContainer" containerID="1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895" Jul 11 00:19:52.894219 containerd[1596]: time="2025-07-11T00:19:52.894171255Z" level=error msg="ContainerStatus for \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\": not found" Jul 11 00:19:52.901687 kubelet[2773]: E0711 00:19:52.901597 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\": not found" containerID="1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895" Jul 11 00:19:52.901916 kubelet[2773]: I0711 00:19:52.901684 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895"} err="failed to get container status \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\": rpc error: code = NotFound desc = an error occurred when try to find container \"1262a5efceeab6deb8fe2b8673ef91b5c5a7b6f0a9aeae93cc30e5bcddec5895\": not found" Jul 11 00:19:52.901916 kubelet[2773]: I0711 00:19:52.901828 2773 scope.go:117] "RemoveContainer" containerID="4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236" Jul 11 00:19:52.905301 containerd[1596]: time="2025-07-11T00:19:52.904343882Z" level=info msg="RemoveContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\"" Jul 11 00:19:52.927961 
kubelet[2773]: I0711 00:19:52.927759 2773 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gz96\" (UniqueName: \"kubernetes.io/projected/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-kube-api-access-8gz96\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:52.927961 kubelet[2773]: I0711 00:19:52.927809 2773 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e30c967-5f21-467d-aa3c-66bac9e1b9d8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.011880 containerd[1596]: time="2025-07-11T00:19:53.011809783Z" level=info msg="RemoveContainer for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" returns successfully" Jul 11 00:19:53.012115 kubelet[2773]: I0711 00:19:53.012079 2773 scope.go:117] "RemoveContainer" containerID="c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569" Jul 11 00:19:53.014305 containerd[1596]: time="2025-07-11T00:19:53.014263866Z" level=info msg="RemoveContainer for \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\"" Jul 11 00:19:53.028454 kubelet[2773]: I0711 00:19:53.028359 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-hostproc\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.028454 kubelet[2773]: I0711 00:19:53.028422 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjg8\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.028454 kubelet[2773]: I0711 00:19:53.028447 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-xtables-lock\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029526 kubelet[2773]: I0711 00:19:53.028469 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-etc-cni-netd\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029526 kubelet[2773]: I0711 00:19:53.028439 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-hostproc" (OuterVolumeSpecName: "hostproc") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.029526 kubelet[2773]: I0711 00:19:53.028491 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-run\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029526 kubelet[2773]: I0711 00:19:53.028466 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.029526 kubelet[2773]: I0711 00:19:53.028508 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-net\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029730 kubelet[2773]: I0711 00:19:53.028518 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.029730 kubelet[2773]: I0711 00:19:53.028516 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.029730 kubelet[2773]: I0711 00:19:53.028536 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-config-path\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029730 kubelet[2773]: I0711 00:19:53.028544 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.029730 kubelet[2773]: I0711 00:19:53.028562 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-hubble-tls\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028582 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-lib-modules\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028601 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-cgroup\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028623 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-kernel\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028640 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cni-path\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028668 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a380c84-61e2-41b6-b55a-1e950e98d990-clustermesh-secrets\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.029906 kubelet[2773]: I0711 00:19:53.028682 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028687 2773 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-bpf-maps\") pod \"5a380c84-61e2-41b6-b55a-1e950e98d990\" (UID: \"5a380c84-61e2-41b6-b55a-1e950e98d990\") " Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028738 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028775 2773 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028789 2773 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028801 2773 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028817 2773 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030124 kubelet[2773]: I0711 00:19:53.028829 2773 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030348 kubelet[2773]: I0711 00:19:53.028840 2773 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030348 kubelet[2773]: I0711 00:19:53.028858 2773 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.030348 kubelet[2773]: I0711 00:19:53.028884 2773 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.030348 kubelet[2773]: I0711 00:19:53.028908 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.030348 kubelet[2773]: I0711 00:19:53.028935 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cni-path" (OuterVolumeSpecName: "cni-path") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:19:53.033357 kubelet[2773]: I0711 00:19:53.033169 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a380c84-61e2-41b6-b55a-1e950e98d990-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:19:53.033357 kubelet[2773]: I0711 00:19:53.033319 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:19:53.033629 kubelet[2773]: I0711 00:19:53.033560 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8" (OuterVolumeSpecName: "kube-api-access-5bjg8") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "kube-api-access-5bjg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:19:53.034349 kubelet[2773]: I0711 00:19:53.034309 2773 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a380c84-61e2-41b6-b55a-1e950e98d990" (UID: "5a380c84-61e2-41b6-b55a-1e950e98d990"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:19:53.046625 containerd[1596]: time="2025-07-11T00:19:53.046548294Z" level=info msg="RemoveContainer for \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" returns successfully" Jul 11 00:19:53.046975 kubelet[2773]: I0711 00:19:53.046913 2773 scope.go:117] "RemoveContainer" containerID="6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7" Jul 11 00:19:53.050210 containerd[1596]: time="2025-07-11T00:19:53.050071029Z" level=info msg="RemoveContainer for \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\"" Jul 11 00:19:53.069637 containerd[1596]: time="2025-07-11T00:19:53.069458247Z" level=info msg="RemoveContainer for \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" returns successfully" Jul 11 00:19:53.069969 kubelet[2773]: I0711 00:19:53.069917 2773 scope.go:117] "RemoveContainer" containerID="603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9" Jul 11 00:19:53.074559 containerd[1596]: time="2025-07-11T00:19:53.073844265Z" level=info msg="RemoveContainer for \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\"" Jul 11 00:19:53.080140 containerd[1596]: time="2025-07-11T00:19:53.080082858Z" level=info msg="RemoveContainer for \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" returns successfully" Jul 11 00:19:53.080629 kubelet[2773]: I0711 00:19:53.080588 2773 scope.go:117] "RemoveContainer" containerID="284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def" Jul 11 00:19:53.082806 containerd[1596]: time="2025-07-11T00:19:53.082767998Z" level=info msg="RemoveContainer for \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\"" Jul 11 00:19:53.087673 containerd[1596]: time="2025-07-11T00:19:53.087614868Z" level=info msg="RemoveContainer for \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" returns successfully" Jul 11 00:19:53.088191 
kubelet[2773]: I0711 00:19:53.087951 2773 scope.go:117] "RemoveContainer" containerID="4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236" Jul 11 00:19:53.088433 containerd[1596]: time="2025-07-11T00:19:53.088382951Z" level=error msg="ContainerStatus for \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\": not found" Jul 11 00:19:53.088566 kubelet[2773]: E0711 00:19:53.088535 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\": not found" containerID="4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236" Jul 11 00:19:53.088624 kubelet[2773]: I0711 00:19:53.088575 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236"} err="failed to get container status \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\": rpc error: code = NotFound desc = an error occurred when try to find container \"4df78f45f21f7d6bdb9e820bc0f6b508a156afa184cb044d6a681d0a28be3236\": not found" Jul 11 00:19:53.088624 kubelet[2773]: I0711 00:19:53.088609 2773 scope.go:117] "RemoveContainer" containerID="c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569" Jul 11 00:19:53.088977 containerd[1596]: time="2025-07-11T00:19:53.088869582Z" level=error msg="ContainerStatus for \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\": not found" Jul 11 00:19:53.089206 kubelet[2773]: E0711 00:19:53.089064 2773 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\": not found" containerID="c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569" Jul 11 00:19:53.089206 kubelet[2773]: I0711 00:19:53.089089 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569"} err="failed to get container status \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\": rpc error: code = NotFound desc = an error occurred when try to find container \"c57e25666f28a52281da77b4759b97d45a29a01ac258523228fb21087d27b569\": not found" Jul 11 00:19:53.089206 kubelet[2773]: I0711 00:19:53.089117 2773 scope.go:117] "RemoveContainer" containerID="6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7" Jul 11 00:19:53.089452 containerd[1596]: time="2025-07-11T00:19:53.089318420Z" level=error msg="ContainerStatus for \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\": not found" Jul 11 00:19:53.089565 kubelet[2773]: E0711 00:19:53.089529 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\": not found" containerID="6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7" Jul 11 00:19:53.089627 kubelet[2773]: I0711 00:19:53.089586 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7"} err="failed to get container status 
\"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6211cd8a7ffcacf4f3453b707182e30de7d94122b5d2c75db08b1605101024d7\": not found" Jul 11 00:19:53.089627 kubelet[2773]: I0711 00:19:53.089623 2773 scope.go:117] "RemoveContainer" containerID="603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9" Jul 11 00:19:53.089866 containerd[1596]: time="2025-07-11T00:19:53.089821162Z" level=error msg="ContainerStatus for \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\": not found" Jul 11 00:19:53.090133 kubelet[2773]: E0711 00:19:53.090094 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\": not found" containerID="603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9" Jul 11 00:19:53.090133 kubelet[2773]: I0711 00:19:53.090128 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9"} err="failed to get container status \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"603e507c77a92f4902429f99384faed071674a927e29193b12c7cfa3faeb0fb9\": not found" Jul 11 00:19:53.090278 kubelet[2773]: I0711 00:19:53.090154 2773 scope.go:117] "RemoveContainer" containerID="284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def" Jul 11 00:19:53.090396 containerd[1596]: time="2025-07-11T00:19:53.090353138Z" level=error msg="ContainerStatus for \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\": not found" Jul 11 00:19:53.090528 kubelet[2773]: E0711 00:19:53.090501 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\": not found" containerID="284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def" Jul 11 00:19:53.090575 kubelet[2773]: I0711 00:19:53.090528 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def"} err="failed to get container status \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\": rpc error: code = NotFound desc = an error occurred when try to find container \"284c53d4bbc20c95f949cba5a21cc4a770552c0c47e57d81553dd76b8be19def\": not found" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.129895 2773 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.129953 2773 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.129967 2773 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.129978 2773 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.129990 2773 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a380c84-61e2-41b6-b55a-1e950e98d990-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.129961 kubelet[2773]: I0711 00:19:53.130000 2773 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a380c84-61e2-41b6-b55a-1e950e98d990-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.130369 kubelet[2773]: I0711 00:19:53.130010 2773 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjg8\" (UniqueName: \"kubernetes.io/projected/5a380c84-61e2-41b6-b55a-1e950e98d990-kube-api-access-5bjg8\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:53.448265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6384152e631daf94c621daf9aa4b71648f2af1d98147eaac5e75f3d1233746f-shm.mount: Deactivated successfully. Jul 11 00:19:53.448427 systemd[1]: var-lib-kubelet-pods-5e30c967\x2d5f21\x2d467d\x2daa3c\x2d66bac9e1b9d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gz96.mount: Deactivated successfully. Jul 11 00:19:53.448522 systemd[1]: var-lib-kubelet-pods-5a380c84\x2d61e2\x2d41b6\x2db55a\x2d1e950e98d990-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5bjg8.mount: Deactivated successfully. Jul 11 00:19:53.448627 systemd[1]: var-lib-kubelet-pods-5a380c84\x2d61e2\x2d41b6\x2db55a\x2d1e950e98d990-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:19:53.448739 systemd[1]: var-lib-kubelet-pods-5a380c84\x2d61e2\x2d41b6\x2db55a\x2d1e950e98d990-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 11 00:19:53.905224 systemd[1]: Removed slice kubepods-burstable-pod5a380c84_61e2_41b6_b55a_1e950e98d990.slice - libcontainer container kubepods-burstable-pod5a380c84_61e2_41b6_b55a_1e950e98d990.slice. Jul 11 00:19:53.905360 systemd[1]: kubepods-burstable-pod5a380c84_61e2_41b6_b55a_1e950e98d990.slice: Consumed 8.478s CPU time, 127.7M memory peak, 692K read from disk, 13.3M written to disk. Jul 11 00:19:54.320525 sshd[4444]: Connection closed by 10.0.0.1 port 52262 Jul 11 00:19:54.321127 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:54.334671 systemd[1]: sshd@30-10.0.0.23:22-10.0.0.1:52262.service: Deactivated successfully. Jul 11 00:19:54.337632 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 00:19:54.338763 systemd-logind[1582]: Session 31 logged out. Waiting for processes to exit. Jul 11 00:19:54.342319 systemd-logind[1582]: Removed session 31. Jul 11 00:19:54.344250 systemd[1]: Started sshd@31-10.0.0.23:22-10.0.0.1:52274.service - OpenSSH per-connection server daemon (10.0.0.1:52274). Jul 11 00:19:54.403632 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 52274 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:19:54.405748 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:54.412847 systemd-logind[1582]: New session 32 of user core. Jul 11 00:19:54.423157 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jul 11 00:19:54.492403 kubelet[2773]: I0711 00:19:54.492329 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" path="/var/lib/kubelet/pods/5a380c84-61e2-41b6-b55a-1e950e98d990/volumes" Jul 11 00:19:54.493426 kubelet[2773]: I0711 00:19:54.493391 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e30c967-5f21-467d-aa3c-66bac9e1b9d8" path="/var/lib/kubelet/pods/5e30c967-5f21-467d-aa3c-66bac9e1b9d8/volumes" Jul 11 00:19:54.577532 kubelet[2773]: E0711 00:19:54.576325 2773 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:19:55.138379 sshd[4604]: Connection closed by 10.0.0.1 port 52274 Jul 11 00:19:55.136901 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:55.159292 systemd[1]: sshd@31-10.0.0.23:22-10.0.0.1:52274.service: Deactivated successfully. Jul 11 00:19:55.167672 systemd[1]: session-32.scope: Deactivated successfully. Jul 11 00:19:55.173590 systemd-logind[1582]: Session 32 logged out. Waiting for processes to exit. Jul 11 00:19:55.186525 systemd[1]: Started sshd@32-10.0.0.23:22-10.0.0.1:52290.service - OpenSSH per-connection server daemon (10.0.0.1:52290). Jul 11 00:19:55.192639 systemd-logind[1582]: Removed session 32. 
Jul 11 00:19:55.223953 kubelet[2773]: E0711 00:19:55.222097 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e30c967-5f21-467d-aa3c-66bac9e1b9d8" containerName="cilium-operator" Jul 11 00:19:55.225437 kubelet[2773]: E0711 00:19:55.224390 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="mount-bpf-fs" Jul 11 00:19:55.225437 kubelet[2773]: E0711 00:19:55.224431 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="mount-cgroup" Jul 11 00:19:55.225437 kubelet[2773]: E0711 00:19:55.224444 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="apply-sysctl-overwrites" Jul 11 00:19:55.225437 kubelet[2773]: E0711 00:19:55.224452 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="clean-cilium-state" Jul 11 00:19:55.225437 kubelet[2773]: E0711 00:19:55.224467 2773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="cilium-agent" Jul 11 00:19:55.225437 kubelet[2773]: I0711 00:19:55.224566 2773 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e30c967-5f21-467d-aa3c-66bac9e1b9d8" containerName="cilium-operator" Jul 11 00:19:55.225437 kubelet[2773]: I0711 00:19:55.224589 2773 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a380c84-61e2-41b6-b55a-1e950e98d990" containerName="cilium-agent" Jul 11 00:19:55.258924 systemd[1]: Created slice kubepods-burstable-pod7cc8ef7d_62d7_4cfe_8038_6c16a375c793.slice - libcontainer container kubepods-burstable-pod7cc8ef7d_62d7_4cfe_8038_6c16a375c793.slice. 
Jul 11 00:19:55.301299 sshd[4616]: Accepted publickey for core from 10.0.0.1 port 52290 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:19:55.306891 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:55.322334 systemd-logind[1582]: New session 33 of user core. Jul 11 00:19:55.340225 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344231 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-cni-path\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344296 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-cilium-run\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344332 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-bpf-maps\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344355 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-hubble-tls\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344379 2773 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-cilium-cgroup\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.344483 kubelet[2773]: I0711 00:19:55.344400 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-host-proc-sys-net\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345460 kubelet[2773]: I0711 00:19:55.344421 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-host-proc-sys-kernel\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345460 kubelet[2773]: I0711 00:19:55.344451 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-clustermesh-secrets\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345460 kubelet[2773]: I0711 00:19:55.344470 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-hostproc\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345460 kubelet[2773]: I0711 00:19:55.344499 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-xtables-lock\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345460 kubelet[2773]: I0711 00:19:55.344522 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-cilium-ipsec-secrets\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345609 kubelet[2773]: I0711 00:19:55.344545 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grx88\" (UniqueName: \"kubernetes.io/projected/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-kube-api-access-grx88\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345609 kubelet[2773]: I0711 00:19:55.344568 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-etc-cni-netd\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345609 kubelet[2773]: I0711 00:19:55.344587 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-lib-modules\") pod \"cilium-vrpjq\" (UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.345609 kubelet[2773]: I0711 00:19:55.344616 2773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cc8ef7d-62d7-4cfe-8038-6c16a375c793-cilium-config-path\") pod \"cilium-vrpjq\" 
(UID: \"7cc8ef7d-62d7-4cfe-8038-6c16a375c793\") " pod="kube-system/cilium-vrpjq" Jul 11 00:19:55.405398 sshd[4620]: Connection closed by 10.0.0.1 port 52290 Jul 11 00:19:55.408764 sshd-session[4616]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:55.423923 systemd[1]: sshd@32-10.0.0.23:22-10.0.0.1:52290.service: Deactivated successfully. Jul 11 00:19:55.428955 systemd[1]: session-33.scope: Deactivated successfully. Jul 11 00:19:55.431315 systemd-logind[1582]: Session 33 logged out. Waiting for processes to exit. Jul 11 00:19:55.440647 systemd[1]: Started sshd@33-10.0.0.23:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). Jul 11 00:19:55.442591 systemd-logind[1582]: Removed session 33. Jul 11 00:19:55.545482 sshd[4627]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:19:55.548159 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:55.567051 systemd-logind[1582]: New session 34 of user core. Jul 11 00:19:55.570089 kubelet[2773]: E0711 00:19:55.567561 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:55.573236 containerd[1596]: time="2025-07-11T00:19:55.573040222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrpjq,Uid:7cc8ef7d-62d7-4cfe-8038-6c16a375c793,Namespace:kube-system,Attempt:0,}" Jul 11 00:19:55.578641 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jul 11 00:19:55.641156 containerd[1596]: time="2025-07-11T00:19:55.640144645Z" level=info msg="connecting to shim 6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:19:55.730779 systemd[1]: Started cri-containerd-6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26.scope - libcontainer container 6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26. Jul 11 00:19:55.826308 containerd[1596]: time="2025-07-11T00:19:55.826230857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrpjq,Uid:7cc8ef7d-62d7-4cfe-8038-6c16a375c793,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\"" Jul 11 00:19:55.827642 kubelet[2773]: E0711 00:19:55.827589 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:55.839649 containerd[1596]: time="2025-07-11T00:19:55.839545165Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:19:55.872212 containerd[1596]: time="2025-07-11T00:19:55.871454278Z" level=info msg="Container c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:19:55.892683 containerd[1596]: time="2025-07-11T00:19:55.892585041Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\"" Jul 11 00:19:55.894018 containerd[1596]: time="2025-07-11T00:19:55.893900570Z" level=info 
msg="StartContainer for \"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\"" Jul 11 00:19:55.895683 containerd[1596]: time="2025-07-11T00:19:55.895610234Z" level=info msg="connecting to shim c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" protocol=ttrpc version=3 Jul 11 00:19:55.936096 systemd[1]: Started cri-containerd-c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c.scope - libcontainer container c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c. Jul 11 00:19:56.014435 containerd[1596]: time="2025-07-11T00:19:56.014251158Z" level=info msg="StartContainer for \"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\" returns successfully" Jul 11 00:19:56.033350 systemd[1]: cri-containerd-c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c.scope: Deactivated successfully. Jul 11 00:19:56.036804 containerd[1596]: time="2025-07-11T00:19:56.036625622Z" level=info msg="received exit event container_id:\"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\" id:\"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\" pid:4701 exited_at:{seconds:1752193196 nanos:35471068}" Jul 11 00:19:56.037020 containerd[1596]: time="2025-07-11T00:19:56.036926041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\" id:\"c9d626625a144e351078334d80395db0ba37f411627293497214ebdc78b3d61c\" pid:4701 exited_at:{seconds:1752193196 nanos:35471068}" Jul 11 00:19:56.914274 kubelet[2773]: E0711 00:19:56.914222 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:56.919315 containerd[1596]: time="2025-07-11T00:19:56.919265580Z" level=info msg="CreateContainer within 
sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:19:57.256094 containerd[1596]: time="2025-07-11T00:19:57.255930070Z" level=info msg="Container f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:19:57.260342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520337135.mount: Deactivated successfully. Jul 11 00:19:57.301231 containerd[1596]: time="2025-07-11T00:19:57.301150327Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\"" Jul 11 00:19:57.302737 containerd[1596]: time="2025-07-11T00:19:57.302199572Z" level=info msg="StartContainer for \"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\"" Jul 11 00:19:57.303673 containerd[1596]: time="2025-07-11T00:19:57.303646157Z" level=info msg="connecting to shim f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" protocol=ttrpc version=3 Jul 11 00:19:57.342035 systemd[1]: Started cri-containerd-f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4.scope - libcontainer container f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4. Jul 11 00:19:57.399323 systemd[1]: cri-containerd-f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4.scope: Deactivated successfully. 
Jul 11 00:19:57.400494 containerd[1596]: time="2025-07-11T00:19:57.400418960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\" id:\"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\" pid:4746 exited_at:{seconds:1752193197 nanos:399689720}" Jul 11 00:19:57.424796 containerd[1596]: time="2025-07-11T00:19:57.424659453Z" level=info msg="received exit event container_id:\"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\" id:\"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\" pid:4746 exited_at:{seconds:1752193197 nanos:399689720}" Jul 11 00:19:57.426298 containerd[1596]: time="2025-07-11T00:19:57.426255021Z" level=info msg="StartContainer for \"f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4\" returns successfully" Jul 11 00:19:57.457168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4ad1867fa56fe603530a115da04daece39dacc94fe23d08dceed33e52a52ac4-rootfs.mount: Deactivated successfully. 
Jul 11 00:19:57.579737 kubelet[2773]: I0711 00:19:57.579563 2773 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:19:57Z","lastTransitionTime":"2025-07-11T00:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 00:19:57.920771 kubelet[2773]: E0711 00:19:57.919529 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:57.922893 containerd[1596]: time="2025-07-11T00:19:57.922823322Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:19:58.264997 containerd[1596]: time="2025-07-11T00:19:58.264830531Z" level=info msg="Container d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:19:58.293433 containerd[1596]: time="2025-07-11T00:19:58.293357339Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\"" Jul 11 00:19:58.294219 containerd[1596]: time="2025-07-11T00:19:58.294050771Z" level=info msg="StartContainer for \"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\"" Jul 11 00:19:58.297774 containerd[1596]: time="2025-07-11T00:19:58.297672802Z" level=info msg="connecting to shim d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" protocol=ttrpc version=3 Jul 11 
00:19:58.329168 systemd[1]: Started cri-containerd-d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478.scope - libcontainer container d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478. Jul 11 00:19:58.389558 containerd[1596]: time="2025-07-11T00:19:58.389496255Z" level=info msg="StartContainer for \"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\" returns successfully" Jul 11 00:19:58.395767 systemd[1]: cri-containerd-d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478.scope: Deactivated successfully. Jul 11 00:19:58.397025 containerd[1596]: time="2025-07-11T00:19:58.396976424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\" id:\"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\" pid:4789 exited_at:{seconds:1752193198 nanos:396602005}" Jul 11 00:19:58.397125 containerd[1596]: time="2025-07-11T00:19:58.397078998Z" level=info msg="received exit event container_id:\"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\" id:\"d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478\" pid:4789 exited_at:{seconds:1752193198 nanos:396602005}" Jul 11 00:19:58.427089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9faeb9c3aeb7a0152abd1fde23224c258230229f974c3c98fede1d4107fc478-rootfs.mount: Deactivated successfully. 
Jul 11 00:19:58.938096 kubelet[2773]: E0711 00:19:58.938027 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:58.940299 containerd[1596]: time="2025-07-11T00:19:58.940260360Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:19:59.328533 containerd[1596]: time="2025-07-11T00:19:59.328470653Z" level=info msg="Container ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:19:59.418355 containerd[1596]: time="2025-07-11T00:19:59.418294277Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\"" Jul 11 00:19:59.419170 containerd[1596]: time="2025-07-11T00:19:59.419127954Z" level=info msg="StartContainer for \"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\"" Jul 11 00:19:59.420441 containerd[1596]: time="2025-07-11T00:19:59.420396783Z" level=info msg="connecting to shim ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" protocol=ttrpc version=3 Jul 11 00:19:59.458086 systemd[1]: Started cri-containerd-ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d.scope - libcontainer container ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d. Jul 11 00:19:59.491800 systemd[1]: cri-containerd-ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d.scope: Deactivated successfully. 
Jul 11 00:19:59.492686 containerd[1596]: time="2025-07-11T00:19:59.492606793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\" id:\"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\" pid:4828 exited_at:{seconds:1752193199 nanos:492221284}" Jul 11 00:19:59.499159 containerd[1596]: time="2025-07-11T00:19:59.499110604Z" level=info msg="received exit event container_id:\"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\" id:\"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\" pid:4828 exited_at:{seconds:1752193199 nanos:492221284}" Jul 11 00:19:59.501147 containerd[1596]: time="2025-07-11T00:19:59.501092132Z" level=info msg="StartContainer for \"ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d\" returns successfully" Jul 11 00:19:59.527469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccd6cf3b999c7f14d95aac7191bfba8d3a69b9ab06da8f750af42e083628e56d-rootfs.mount: Deactivated successfully. 
Jul 11 00:19:59.577658 kubelet[2773]: E0711 00:19:59.577598 2773 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:19:59.945788 kubelet[2773]: E0711 00:19:59.945745 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:59.948793 containerd[1596]: time="2025-07-11T00:19:59.948752175Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:19:59.962561 containerd[1596]: time="2025-07-11T00:19:59.961673445Z" level=info msg="Container dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:19:59.973072 containerd[1596]: time="2025-07-11T00:19:59.972989328Z" level=info msg="CreateContainer within sandbox \"6c1c1d97181f718ebefe51f6ea20a620ba810cc8c9d6577435e41228a2bfdd26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\"" Jul 11 00:19:59.973551 containerd[1596]: time="2025-07-11T00:19:59.973523658Z" level=info msg="StartContainer for \"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\"" Jul 11 00:19:59.974996 containerd[1596]: time="2025-07-11T00:19:59.974940448Z" level=info msg="connecting to shim dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3" address="unix:///run/containerd/s/98000c195ab6b9af215ab3a0701d530bb2f6642dfd8b57e97dbff7f267f0da97" protocol=ttrpc version=3 Jul 11 00:20:00.004022 systemd[1]: Started cri-containerd-dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3.scope - libcontainer container 
dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3. Jul 11 00:20:00.046918 containerd[1596]: time="2025-07-11T00:20:00.046872037Z" level=info msg="StartContainer for \"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" returns successfully" Jul 11 00:20:00.130831 containerd[1596]: time="2025-07-11T00:20:00.130741894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"65111074d9067afb9b4787272f5c1f190a2e953203799b283a983124ea806de2\" pid:4898 exited_at:{seconds:1752193200 nanos:130317622}" Jul 11 00:20:00.519742 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 11 00:20:00.951937 kubelet[2773]: E0711 00:20:00.951888 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:01.954090 kubelet[2773]: E0711 00:20:01.954040 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:02.387494 containerd[1596]: time="2025-07-11T00:20:02.387431894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"e15fd79a40b7412608da00b96e7c1d7937f47713e1a2c3259bf74a95ae014b4a\" pid:5038 exit_status:1 exited_at:{seconds:1752193202 nanos:386880302}" Jul 11 00:20:02.956422 kubelet[2773]: E0711 00:20:02.956345 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:04.098830 systemd-networkd[1484]: lxc_health: Link UP Jul 11 00:20:04.100180 systemd-networkd[1484]: lxc_health: Gained carrier Jul 11 00:20:04.553833 containerd[1596]: 
time="2025-07-11T00:20:04.553749997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"eae1025bf3d9e33079c77f5a4e5a86edaa464b06737cc14ba87c1269540dd6e4\" pid:5426 exited_at:{seconds:1752193204 nanos:553125224}" Jul 11 00:20:05.569678 kubelet[2773]: E0711 00:20:05.569536 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:05.590322 kubelet[2773]: I0711 00:20:05.590237 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vrpjq" podStartSLOduration=10.590208882 podStartE2EDuration="10.590208882s" podCreationTimestamp="2025-07-11 00:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:20:00.98612359 +0000 UTC m=+136.620723640" watchObservedRunningTime="2025-07-11 00:20:05.590208882 +0000 UTC m=+141.224808932" Jul 11 00:20:05.963189 kubelet[2773]: E0711 00:20:05.963038 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:05.969248 systemd-networkd[1484]: lxc_health: Gained IPv6LL Jul 11 00:20:06.663925 containerd[1596]: time="2025-07-11T00:20:06.663862571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"81003d2844fca0ed83b0974cf54ee96a6990eed1a5936a9f234c3b0dec60968c\" pid:5467 exited_at:{seconds:1752193206 nanos:663429802}" Jul 11 00:20:06.964717 kubelet[2773]: E0711 00:20:06.964551 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 
00:20:08.786597 containerd[1596]: time="2025-07-11T00:20:08.786459898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"4cc7f8b8721492277b58520fbb64076a04f8c38c5cf92d31ca051bd48b4a2d12\" pid:5498 exited_at:{seconds:1752193208 nanos:786052878}" Jul 11 00:20:10.992774 containerd[1596]: time="2025-07-11T00:20:10.992680236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dea098015cae8a6845be25f6a31fff86fc7498d70cc4451c351473ced7d371a3\" id:\"ae50f35a9be03b07cae5c757e373dda9f9c9ef613c6d01cef61c42fe9a1b43a8\" pid:5522 exited_at:{seconds:1752193210 nanos:992176583}" Jul 11 00:20:11.000210 sshd[4635]: Connection closed by 10.0.0.1 port 52306 Jul 11 00:20:11.057387 sshd-session[4627]: pam_unix(sshd:session): session closed for user core Jul 11 00:20:11.062543 systemd[1]: sshd@33-10.0.0.23:22-10.0.0.1:52306.service: Deactivated successfully. Jul 11 00:20:11.065030 systemd[1]: session-34.scope: Deactivated successfully. Jul 11 00:20:11.065891 systemd-logind[1582]: Session 34 logged out. Waiting for processes to exit. Jul 11 00:20:11.067351 systemd-logind[1582]: Removed session 34.