Mar 20 21:26:06.870370 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 20 19:36:47 -00 2025 Mar 20 21:26:06.870399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:26:06.870414 kernel: BIOS-provided physical RAM map: Mar 20 21:26:06.870423 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 20 21:26:06.870432 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 20 21:26:06.870440 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 20 21:26:06.870450 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 20 21:26:06.870459 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 20 21:26:06.870468 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 20 21:26:06.870477 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 20 21:26:06.870489 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 20 21:26:06.870498 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 20 21:26:06.870506 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 20 21:26:06.870515 kernel: NX (Execute Disable) protection: active Mar 20 21:26:06.870526 kernel: APIC: Static calls initialized Mar 20 21:26:06.870538 kernel: SMBIOS 2.8 present. 
Mar 20 21:26:06.870548 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 20 21:26:06.870558 kernel: Hypervisor detected: KVM Mar 20 21:26:06.870567 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 20 21:26:06.870576 kernel: kvm-clock: using sched offset of 2310098005 cycles Mar 20 21:26:06.870587 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 20 21:26:06.870597 kernel: tsc: Detected 2794.750 MHz processor Mar 20 21:26:06.870607 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 20 21:26:06.870617 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 20 21:26:06.870646 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 20 21:26:06.870660 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 20 21:26:06.870670 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 20 21:26:06.870680 kernel: Using GB pages for direct mapping Mar 20 21:26:06.870690 kernel: ACPI: Early table checksum verification disabled Mar 20 21:26:06.870700 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 20 21:26:06.870710 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870720 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870730 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870740 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 20 21:26:06.870752 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870762 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870772 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870782 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:26:06.870792 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Mar 20 21:26:06.870802 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Mar 20 21:26:06.870816 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 20 21:26:06.870828 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Mar 20 21:26:06.870838 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Mar 20 21:26:06.870849 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Mar 20 21:26:06.870859 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Mar 20 21:26:06.870869 kernel: No NUMA configuration found Mar 20 21:26:06.870879 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 20 21:26:06.870889 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 20 21:26:06.870902 kernel: Zone ranges: Mar 20 21:26:06.870912 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 20 21:26:06.870922 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 20 21:26:06.870932 kernel: Normal empty Mar 20 21:26:06.870942 kernel: Movable zone start for each node Mar 20 21:26:06.870952 kernel: Early memory node ranges Mar 20 21:26:06.870962 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 20 21:26:06.870973 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 20 21:26:06.870983 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 20 21:26:06.870997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 20 21:26:06.871007 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 20 21:26:06.871018 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 20 21:26:06.871028 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 20 21:26:06.871039 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 20 21:26:06.871049 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 20 21:26:06.871059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 20 21:26:06.871069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 20 21:26:06.871080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 20 21:26:06.871090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 20 21:26:06.871104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 20 21:26:06.871114 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 20 21:26:06.871124 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 20 21:26:06.871134 kernel: TSC deadline timer available Mar 20 21:26:06.871144 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 20 21:26:06.871154 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 20 21:26:06.871165 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 20 21:26:06.871175 kernel: kvm-guest: setup PV sched yield Mar 20 21:26:06.871185 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 20 21:26:06.871199 kernel: Booting paravirtualized kernel on KVM Mar 20 21:26:06.871209 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 20 21:26:06.871220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 20 21:26:06.871229 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 20 21:26:06.871237 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 20 21:26:06.871244 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 20 21:26:06.871251 kernel: kvm-guest: PV spinlocks enabled Mar 20 21:26:06.871259 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 20 21:26:06.871268 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:26:06.871279 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 20 21:26:06.871286 kernel: random: crng init done Mar 20 21:26:06.871294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 20 21:26:06.871302 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 20 21:26:06.871310 kernel: Fallback order for Node 0: 0 Mar 20 21:26:06.871318 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 20 21:26:06.871325 kernel: Policy zone: DMA32 Mar 20 21:26:06.871341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 20 21:26:06.871351 kernel: Memory: 2430492K/2571752K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 141000K reserved, 0K cma-reserved) Mar 20 21:26:06.871360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 20 21:26:06.871367 kernel: ftrace: allocating 37985 entries in 149 pages Mar 20 21:26:06.871375 kernel: ftrace: allocated 149 pages with 4 groups Mar 20 21:26:06.871382 kernel: Dynamic Preempt: voluntary Mar 20 21:26:06.871390 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 20 21:26:06.871398 kernel: rcu: RCU event tracing is enabled. Mar 20 21:26:06.871406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 20 21:26:06.871414 kernel: Trampoline variant of Tasks RCU enabled. Mar 20 21:26:06.871425 kernel: Rude variant of Tasks RCU enabled. Mar 20 21:26:06.871434 kernel: Tracing variant of Tasks RCU enabled. Mar 20 21:26:06.871443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 20 21:26:06.871452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 20 21:26:06.871460 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 20 21:26:06.871467 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 20 21:26:06.871475 kernel: Console: colour VGA+ 80x25 Mar 20 21:26:06.871482 kernel: printk: console [ttyS0] enabled Mar 20 21:26:06.871490 kernel: ACPI: Core revision 20230628 Mar 20 21:26:06.871499 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 20 21:26:06.871507 kernel: APIC: Switch to symmetric I/O mode setup Mar 20 21:26:06.871515 kernel: x2apic enabled Mar 20 21:26:06.871522 kernel: APIC: Switched APIC routing to: physical x2apic Mar 20 21:26:06.871530 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 20 21:26:06.871538 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 20 21:26:06.871545 kernel: kvm-guest: setup PV IPIs Mar 20 21:26:06.871562 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 20 21:26:06.871570 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 20 21:26:06.871578 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Mar 20 21:26:06.871585 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 20 21:26:06.871593 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 20 21:26:06.871603 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 20 21:26:06.871611 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 20 21:26:06.871619 kernel: Spectre V2 : Mitigation: Retpolines Mar 20 21:26:06.871707 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 20 21:26:06.871717 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 20 21:26:06.871728 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 20 21:26:06.871736 kernel: RETBleed: Mitigation: untrained return thunk Mar 20 21:26:06.871744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 20 21:26:06.871752 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 20 21:26:06.871760 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 20 21:26:06.871769 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 20 21:26:06.871777 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 20 21:26:06.871785 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 20 21:26:06.871795 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 20 21:26:06.871803 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 20 21:26:06.871810 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 20 21:26:06.871818 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 20 21:26:06.871826 kernel: Freeing SMP alternatives memory: 32K Mar 20 21:26:06.871834 kernel: pid_max: default: 32768 minimum: 301 Mar 20 21:26:06.871842 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 20 21:26:06.871850 kernel: landlock: Up and running. Mar 20 21:26:06.871858 kernel: SELinux: Initializing. Mar 20 21:26:06.871868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:26:06.871875 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:26:06.871883 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 20 21:26:06.871891 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:26:06.871899 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:26:06.871907 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:26:06.871915 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 20 21:26:06.871923 kernel: ... version: 0 Mar 20 21:26:06.871931 kernel: ... bit width: 48 Mar 20 21:26:06.871941 kernel: ... generic registers: 6 Mar 20 21:26:06.871949 kernel: ... value mask: 0000ffffffffffff Mar 20 21:26:06.871957 kernel: ... max period: 00007fffffffffff Mar 20 21:26:06.871964 kernel: ... fixed-purpose events: 0 Mar 20 21:26:06.871972 kernel: ... 
event mask: 000000000000003f Mar 20 21:26:06.871980 kernel: signal: max sigframe size: 1776 Mar 20 21:26:06.871988 kernel: rcu: Hierarchical SRCU implementation. Mar 20 21:26:06.871996 kernel: rcu: Max phase no-delay instances is 400. Mar 20 21:26:06.872004 kernel: smp: Bringing up secondary CPUs ... Mar 20 21:26:06.872013 kernel: smpboot: x86: Booting SMP configuration: Mar 20 21:26:06.872021 kernel: .... node #0, CPUs: #1 #2 #3 Mar 20 21:26:06.872029 kernel: smp: Brought up 1 node, 4 CPUs Mar 20 21:26:06.872037 kernel: smpboot: Max logical packages: 1 Mar 20 21:26:06.872045 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Mar 20 21:26:06.872052 kernel: devtmpfs: initialized Mar 20 21:26:06.872060 kernel: x86/mm: Memory block size: 128MB Mar 20 21:26:06.872068 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 20 21:26:06.872076 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 20 21:26:06.872086 kernel: pinctrl core: initialized pinctrl subsystem Mar 20 21:26:06.872094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 20 21:26:06.872102 kernel: audit: initializing netlink subsys (disabled) Mar 20 21:26:06.872110 kernel: audit: type=2000 audit(1742505966.467:1): state=initialized audit_enabled=0 res=1 Mar 20 21:26:06.872117 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 20 21:26:06.872125 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 20 21:26:06.872133 kernel: cpuidle: using governor menu Mar 20 21:26:06.872141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 20 21:26:06.872149 kernel: dca service started, version 1.12.1 Mar 20 21:26:06.872159 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 20 21:26:06.872167 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 20 21:26:06.872175 kernel: PCI: Using configuration type 1 for base access Mar 20 21:26:06.872183 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 20 21:26:06.872191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 20 21:26:06.872199 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 20 21:26:06.872206 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 20 21:26:06.872214 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 20 21:26:06.872222 kernel: ACPI: Added _OSI(Module Device) Mar 20 21:26:06.872232 kernel: ACPI: Added _OSI(Processor Device) Mar 20 21:26:06.872240 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 20 21:26:06.872247 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 20 21:26:06.872255 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 20 21:26:06.872263 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 20 21:26:06.872271 kernel: ACPI: Interpreter enabled Mar 20 21:26:06.872279 kernel: ACPI: PM: (supports S0 S3 S5) Mar 20 21:26:06.872287 kernel: ACPI: Using IOAPIC for interrupt routing Mar 20 21:26:06.872295 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 20 21:26:06.872304 kernel: PCI: Using E820 reservations for host bridge windows Mar 20 21:26:06.872312 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 20 21:26:06.872320 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 20 21:26:06.872505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 20 21:26:06.872655 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 20 21:26:06.872794 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 20 21:26:06.872805 kernel: PCI host bridge to bus 0000:00 Mar 20 21:26:06.872936 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 20 21:26:06.873050 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 20 21:26:06.873163 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 20 21:26:06.873274 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 20 21:26:06.873395 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 20 21:26:06.873507 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 20 21:26:06.873619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 20 21:26:06.873846 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 20 21:26:06.873981 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 20 21:26:06.874103 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 20 21:26:06.874223 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 20 21:26:06.874353 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 20 21:26:06.874480 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 20 21:26:06.874619 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 20 21:26:06.874782 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 20 21:26:06.874907 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 20 21:26:06.875030 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 20 21:26:06.875162 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 20 21:26:06.875287 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 20 21:26:06.875420 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 20 
21:26:06.875548 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 20 21:26:06.875711 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 20 21:26:06.875841 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 20 21:26:06.875963 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 20 21:26:06.876086 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 20 21:26:06.876208 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 20 21:26:06.876348 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 20 21:26:06.876479 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 20 21:26:06.876609 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 20 21:26:06.876763 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 20 21:26:06.876887 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 20 21:26:06.877017 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 20 21:26:06.877139 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 20 21:26:06.877151 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 20 21:26:06.877163 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 20 21:26:06.877171 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 20 21:26:06.877179 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 20 21:26:06.877186 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 20 21:26:06.877194 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 20 21:26:06.877202 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 20 21:26:06.877210 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 20 21:26:06.877218 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 20 21:26:06.877226 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 20 21:26:06.877236 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 20 21:26:06.877244 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 20 21:26:06.877252 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 20 21:26:06.877260 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 20 21:26:06.877268 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 20 21:26:06.877276 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 20 21:26:06.877284 kernel: iommu: Default domain type: Translated Mar 20 21:26:06.877292 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 20 21:26:06.877300 kernel: PCI: Using ACPI for IRQ routing Mar 20 21:26:06.877310 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 20 21:26:06.877318 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 20 21:26:06.877325 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 20 21:26:06.877464 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 20 21:26:06.877587 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 20 21:26:06.877780 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 20 21:26:06.877793 kernel: vgaarb: loaded Mar 20 21:26:06.877801 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 20 21:26:06.877813 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 20 21:26:06.877821 kernel: clocksource: Switched to clocksource kvm-clock Mar 20 21:26:06.877829 kernel: VFS: Disk quotas dquot_6.6.0 Mar 20 
21:26:06.877837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 20 21:26:06.877845 kernel: pnp: PnP ACPI init Mar 20 21:26:06.877979 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 20 21:26:06.877991 kernel: pnp: PnP ACPI: found 6 devices Mar 20 21:26:06.877999 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 20 21:26:06.878010 kernel: NET: Registered PF_INET protocol family Mar 20 21:26:06.878018 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 20 21:26:06.878026 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 20 21:26:06.878034 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 20 21:26:06.878042 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 20 21:26:06.878050 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 20 21:26:06.878058 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 20 21:26:06.878066 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:26:06.878074 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:26:06.878085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 20 21:26:06.878092 kernel: NET: Registered PF_XDP protocol family Mar 20 21:26:06.878207 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 20 21:26:06.878319 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 20 21:26:06.878444 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 20 21:26:06.878558 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 20 21:26:06.878693 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 20 21:26:06.878809 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 20 21:26:06.878824 kernel: PCI: CLS 0 bytes, default 64 Mar 20 21:26:06.878832 kernel: Initialise system trusted keyrings Mar 20 21:26:06.878840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 20 21:26:06.878848 kernel: Key type asymmetric registered Mar 20 21:26:06.878856 kernel: Asymmetric key parser 'x509' registered Mar 20 21:26:06.878864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 20 21:26:06.878871 kernel: io scheduler mq-deadline registered Mar 20 21:26:06.878879 kernel: io scheduler kyber registered Mar 20 21:26:06.878887 kernel: io scheduler bfq registered Mar 20 21:26:06.878895 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 20 21:26:06.878906 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 20 21:26:06.878914 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 20 21:26:06.878922 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 20 21:26:06.878930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 20 21:26:06.878938 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 20 21:26:06.878946 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 20 21:26:06.878954 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 20 21:26:06.878962 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 20 21:26:06.878970 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 20 21:26:06.879103 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 20 21:26:06.879219 kernel: 
rtc_cmos 00:04: registered as rtc0 Mar 20 21:26:06.879343 kernel: rtc_cmos 00:04: setting system clock to 2025-03-20T21:26:06 UTC (1742505966) Mar 20 21:26:06.879460 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 20 21:26:06.879470 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 20 21:26:06.879478 kernel: NET: Registered PF_INET6 protocol family Mar 20 21:26:06.879486 kernel: Segment Routing with IPv6 Mar 20 21:26:06.879498 kernel: In-situ OAM (IOAM) with IPv6 Mar 20 21:26:06.879506 kernel: NET: Registered PF_PACKET protocol family Mar 20 21:26:06.879514 kernel: Key type dns_resolver registered Mar 20 21:26:06.879521 kernel: IPI shorthand broadcast: enabled Mar 20 21:26:06.879529 kernel: sched_clock: Marking stable (569002309, 103217172)->(684404589, -12185108) Mar 20 21:26:06.879537 kernel: registered taskstats version 1 Mar 20 21:26:06.879545 kernel: Loading compiled-in X.509 certificates Mar 20 21:26:06.879553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 9e7923b67df1c6f0613bc4380f7ea8de9ce851ac' Mar 20 21:26:06.879561 kernel: Key type .fscrypt registered Mar 20 21:26:06.879571 kernel: Key type fscrypt-provisioning registered Mar 20 21:26:06.879579 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 20 21:26:06.879587 kernel: ima: Allocated hash algorithm: sha1 Mar 20 21:26:06.879595 kernel: ima: No architecture policies found Mar 20 21:26:06.879603 kernel: clk: Disabling unused clocks Mar 20 21:26:06.879611 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 20 21:26:06.879619 kernel: Write protecting the kernel read-only data: 40960k Mar 20 21:26:06.879681 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 20 21:26:06.879690 kernel: Run /init as init process Mar 20 21:26:06.879701 kernel: with arguments: Mar 20 21:26:06.879709 kernel: /init Mar 20 21:26:06.879717 kernel: with environment: Mar 20 21:26:06.879725 kernel: HOME=/ Mar 20 21:26:06.879732 kernel: TERM=linux Mar 20 21:26:06.879740 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 21:26:06.879749 systemd[1]: Successfully made /usr/ read-only. Mar 20 21:26:06.879760 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:26:06.879772 systemd[1]: Detected virtualization kvm. Mar 20 21:26:06.879780 systemd[1]: Detected architecture x86-64. Mar 20 21:26:06.879788 systemd[1]: Running in initrd. Mar 20 21:26:06.879796 systemd[1]: No hostname configured, using default hostname. Mar 20 21:26:06.879805 systemd[1]: Hostname set to . Mar 20 21:26:06.879813 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:26:06.879822 systemd[1]: Queued start job for default target initrd.target. Mar 20 21:26:06.879830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:26:06.879841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:26:06.879861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 21:26:06.879872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 20 21:26:06.879881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 21:26:06.879891 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 21:26:06.879903 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 21:26:06.879912 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 21:26:06.879920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:26:06.879929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:26:06.879938 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:26:06.879946 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:26:06.879955 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:26:06.879964 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:26:06.879974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:26:06.879983 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:26:06.879992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 21:26:06.880001 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 21:26:06.880010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:26:06.880019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:26:06.880027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:26:06.880036 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:26:06.880045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 21:26:06.880056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:26:06.880065 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 21:26:06.880073 systemd[1]: Starting systemd-fsck-usr.service... Mar 20 21:26:06.880082 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:26:06.880091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:26:06.880100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:26:06.880108 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 20 21:26:06.880117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:26:06.880129 systemd[1]: Finished systemd-fsck-usr.service. Mar 20 21:26:06.880138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:26:06.880172 systemd-journald[193]: Collecting audit messages is disabled. Mar 20 21:26:06.880194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:26:06.880204 systemd-journald[193]: Journal started Mar 20 21:26:06.880225 systemd-journald[193]: Runtime Journal (/run/log/journal/c72abe040f7c4c129dc6fa0f88faa653) is 6M, max 48.3M, 42.3M free. Mar 20 21:26:06.864319 systemd-modules-load[195]: Inserted module 'overlay' Mar 20 21:26:06.906670 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 20 21:26:06.906695 kernel: Bridge firewalling registered Mar 20 21:26:06.906709 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:26:06.890440 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 20 21:26:06.898686 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:26:06.899182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:26:06.900524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:26:06.901828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:26:06.905316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:26:06.923183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:26:06.936737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:26:06.937532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:26:06.940122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:26:06.942430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:26:06.946909 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 20 21:26:06.948876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:26:06.971719 dracut-cmdline[230]: dracut-dracut-053 Mar 20 21:26:06.974587 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:26:06.993018 systemd-resolved[231]: Positive Trust Anchors: Mar 20 21:26:06.993033 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:26:06.993063 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:26:06.995479 systemd-resolved[231]: Defaulting to hostname 'linux'. Mar 20 21:26:06.996516 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:26:07.002081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:26:07.063657 kernel: SCSI subsystem initialized Mar 20 21:26:07.072647 kernel: Loading iSCSI transport class v2.0-870. Mar 20 21:26:07.082652 kernel: iscsi: registered transport (tcp) Mar 20 21:26:07.103658 kernel: iscsi: registered transport (qla4xxx) Mar 20 21:26:07.103686 kernel: QLogic iSCSI HBA Driver Mar 20 21:26:07.145562 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 20 21:26:07.147709 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 20 21:26:07.184956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 20 21:26:07.184991 kernel: device-mapper: uevent: version 1.0.3 Mar 20 21:26:07.185960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 20 21:26:07.224656 kernel: raid6: avx2x4 gen() 28457 MB/s Mar 20 21:26:07.241653 kernel: raid6: avx2x2 gen() 31280 MB/s Mar 20 21:26:07.258754 kernel: raid6: avx2x1 gen() 25825 MB/s Mar 20 21:26:07.258774 kernel: raid6: using algorithm avx2x2 gen() 31280 MB/s Mar 20 21:26:07.276750 kernel: raid6: .... xor() 19507 MB/s, rmw enabled Mar 20 21:26:07.276782 kernel: raid6: using avx2x2 recovery algorithm Mar 20 21:26:07.297653 kernel: xor: automatically using best checksumming function avx Mar 20 21:26:07.443664 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 20 21:26:07.456059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:26:07.459137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:26:07.490543 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 20 21:26:07.495730 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:26:07.499433 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 20 21:26:07.524805 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Mar 20 21:26:07.560336 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:26:07.564006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:26:07.640200 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:26:07.644908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 21:26:07.667980 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 21:26:07.669915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:26:07.672722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:26:07.675068 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:26:07.679646 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 20 21:26:07.710092 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 21:26:07.710245 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 21:26:07.710257 kernel: GPT:9289727 != 19775487 Mar 20 21:26:07.710268 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 21:26:07.710279 kernel: GPT:9289727 != 19775487 Mar 20 21:26:07.710294 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 20 21:26:07.710305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:26:07.710323 kernel: cryptd: max_cpu_qlen set to 1000 Mar 20 21:26:07.710333 kernel: libata version 3.00 loaded. Mar 20 21:26:07.680008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 21:26:07.714429 kernel: AVX2 version of gcm_enc/dec engaged. Mar 20 21:26:07.714451 kernel: AES CTR mode by8 optimization enabled Mar 20 21:26:07.714927 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 20 21:26:07.718007 kernel: ahci 0000:00:1f.2: version 3.0 Mar 20 21:26:07.737486 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 20 21:26:07.737507 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 20 21:26:07.737728 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 20 21:26:07.737900 kernel: scsi host0: ahci Mar 20 21:26:07.738078 kernel: scsi host1: ahci Mar 20 21:26:07.738226 kernel: scsi host2: ahci Mar 20 21:26:07.738384 kernel: scsi host3: ahci Mar 20 21:26:07.738550 kernel: scsi host4: ahci Mar 20 21:26:07.738763 kernel: scsi host5: ahci Mar 20 21:26:07.738906 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 20 21:26:07.738918 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 20 21:26:07.738936 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 20 21:26:07.738949 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 20 21:26:07.738963 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 20 21:26:07.738976 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 20 21:26:07.738989 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (473) Mar 20 21:26:07.715068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:26:07.743331 kernel: BTRFS: device fsid 48a514e8-9ecc-46c2-935b-caca347f921e devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (460) Mar 20 21:26:07.719717 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:26:07.721139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:26:07.721266 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:26:07.723511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:26:07.729734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:26:07.738129 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:26:07.767571 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 20 21:26:07.793996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:26:07.810136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 21:26:07.824914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:26:07.832065 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 21:26:07.832334 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 21:26:07.837674 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 21:26:07.840576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:26:07.862810 disk-uuid[556]: Primary Header is updated. Mar 20 21:26:07.862810 disk-uuid[556]: Secondary Entries is updated. Mar 20 21:26:07.862810 disk-uuid[556]: Secondary Header is updated. 
Mar 20 21:26:07.865654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:26:07.877797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:26:08.048957 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 20 21:26:08.049052 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 20 21:26:08.049068 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 20 21:26:08.050670 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 20 21:26:08.050759 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 20 21:26:08.051781 kernel: ata3.00: applying bridge limits Mar 20 21:26:08.052659 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 20 21:26:08.052684 kernel: ata3.00: configured for UDMA/100 Mar 20 21:26:08.053668 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 20 21:26:08.058659 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 20 21:26:08.114668 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 20 21:26:08.128318 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 20 21:26:08.128333 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 20 21:26:08.875460 disk-uuid[558]: The operation has completed successfully. Mar 20 21:26:08.876926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:26:08.908346 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:26:08.908483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:26:08.946795 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:26:08.967737 sh[592]: Success Mar 20 21:26:08.981685 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 20 21:26:09.018122 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:26:09.021690 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:26:09.046182 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 21:26:09.051675 kernel: BTRFS info (device dm-0): first mount of filesystem 48a514e8-9ecc-46c2-935b-caca347f921e Mar 20 21:26:09.051702 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:26:09.051713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:26:09.052682 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:26:09.054012 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:26:09.057939 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:26:09.058835 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 21:26:09.059757 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:26:09.060865 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 20 21:26:09.087379 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:26:09.087435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:26:09.087447 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:26:09.090659 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:26:09.094676 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:26:09.105358 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:26:09.107692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 21:26:09.166353 ignition[687]: Ignition 2.20.0 Mar 20 21:26:09.166365 ignition[687]: Stage: fetch-offline Mar 20 21:26:09.166397 ignition[687]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:09.166406 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:09.166503 ignition[687]: parsed url from cmdline: "" Mar 20 21:26:09.166507 ignition[687]: no config URL provided Mar 20 21:26:09.166513 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:26:09.166521 ignition[687]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:26:09.166547 ignition[687]: op(1): [started] loading QEMU firmware config module Mar 20 21:26:09.166552 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:26:09.176005 ignition[687]: op(1): [finished] loading QEMU firmware config module Mar 20 21:26:09.188612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:26:09.191221 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:26:09.235391 ignition[687]: parsing config with SHA512: bc83297f1d5f8d76f37e80b6bf1c548f709c710d6a4763bfba438be6543b7f66a07959d494d1a0e90f7e7372baf3f7adfe28b0e5ce665336261acda614eb7853 Mar 20 21:26:09.236882 systemd-networkd[778]: lo: Link UP Mar 20 21:26:09.236894 systemd-networkd[778]: lo: Gained carrier Mar 20 21:26:09.238574 systemd-networkd[778]: Enumeration completed Mar 20 21:26:09.238933 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:26:09.238937 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:26:09.239421 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:26:09.240295 systemd-networkd[778]: eth0: Link UP Mar 20 21:26:09.240299 systemd-networkd[778]: eth0: Gained carrier Mar 20 21:26:09.240306 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:26:09.240394 systemd[1]: Reached target network.target - Network. Mar 20 21:26:09.254360 unknown[687]: fetched base config from "system" Mar 20 21:26:09.254373 unknown[687]: fetched user config from "qemu" Mar 20 21:26:09.254776 ignition[687]: fetch-offline: fetch-offline passed Mar 20 21:26:09.255677 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:26:09.254844 ignition[687]: Ignition finished successfully Mar 20 21:26:09.257590 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 20 21:26:09.258536 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 21:26:09.259380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:26:09.285964 ignition[783]: Ignition 2.20.0 Mar 20 21:26:09.285976 ignition[783]: Stage: kargs Mar 20 21:26:09.286120 ignition[783]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:09.286132 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:09.286974 ignition[783]: kargs: kargs passed Mar 20 21:26:09.287014 ignition[783]: Ignition finished successfully Mar 20 21:26:09.293237 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:26:09.295334 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 21:26:09.316006 ignition[792]: Ignition 2.20.0 Mar 20 21:26:09.316018 ignition[792]: Stage: disks Mar 20 21:26:09.316176 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:09.316188 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:09.317049 ignition[792]: disks: disks passed Mar 20 21:26:09.318623 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:26:09.317092 ignition[792]: Ignition finished successfully Mar 20 21:26:09.320860 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 21:26:09.321970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:26:09.324042 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:26:09.325047 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:26:09.326747 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:26:09.328056 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 21:26:09.368548 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:26:09.728235 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:26:09.731818 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:26:09.853655 kernel: EXT4-fs (vda9): mounted filesystem 79cdbe74-6884-4c57-b04d-c9a431509f16 r/w with ordered data mode. Quota mode: none. Mar 20 21:26:09.853902 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:26:09.856074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:26:09.859264 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:26:09.861672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:26:09.863605 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 21:26:09.863660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:26:09.863682 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:26:09.871583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:26:09.875042 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 20 21:26:09.878756 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (810) Mar 20 21:26:09.878780 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:26:09.879999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:26:09.880013 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:26:09.883648 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:26:09.885272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 21:26:09.912833 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:26:09.917728 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:26:09.926727 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:26:09.930555 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:26:10.013718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:26:10.031174 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:26:10.033770 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:26:10.051194 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 21:26:10.071049 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:26:10.130352 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 21:26:10.227509 ignition[927]: INFO : Ignition 2.20.0 Mar 20 21:26:10.227509 ignition[927]: INFO : Stage: mount Mar 20 21:26:10.239822 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:10.239822 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:10.239822 ignition[927]: INFO : mount: mount passed Mar 20 21:26:10.239822 ignition[927]: INFO : Ignition finished successfully Mar 20 21:26:10.245735 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 21:26:10.247202 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 21:26:10.292870 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:26:10.323587 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (936) Mar 20 21:26:10.323644 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:26:10.323656 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:26:10.324462 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:26:10.327649 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:26:10.329196 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:26:10.359730 ignition[953]: INFO : Ignition 2.20.0 Mar 20 21:26:10.359730 ignition[953]: INFO : Stage: files Mar 20 21:26:10.361617 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:10.361617 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:10.361617 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Mar 20 21:26:10.365119 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 21:26:10.365119 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 21:26:10.367869 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 21:26:10.369309 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 21:26:10.371092 unknown[953]: wrote ssh authorized keys file for user: core Mar 20 21:26:10.372541 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 21:26:10.374602 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 20 21:26:10.376542 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 20 21:26:10.446046 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 21:26:10.559556 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 20 21:26:10.559556 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:26:10.564239 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 20 21:26:11.037560 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 20 21:26:11.120850 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:26:11.120850 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:26:11.124565 ignition[953]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 21:26:11.124565 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 20 21:26:11.257863 systemd-networkd[778]: eth0: Gained IPv6LL Mar 20 21:26:11.411434 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 20 21:26:11.775920 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 21:26:11.775920 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 20 21:26:11.779400 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 21:26:11.781513 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 21:26:11.781513 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 20 21:26:11.781513 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 20 21:26:11.786088 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:26:11.788251 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:26:11.788251 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 20 21:26:11.791388 ignition[953]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 21:26:11.805324 ignition[953]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:26:11.809167 ignition[953]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:26:11.810808 ignition[953]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 21:26:11.810808 ignition[953]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 20 21:26:11.813559 ignition[953]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 21:26:11.814965 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:26:11.816723 
ignition[953]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:26:11.818420 ignition[953]: INFO : files: files passed Mar 20 21:26:11.819148 ignition[953]: INFO : Ignition finished successfully Mar 20 21:26:11.822111 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 21:26:11.823535 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 21:26:11.825573 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 21:26:11.840546 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 21:26:11.841761 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 21:26:11.846098 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:26:11.846098 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:26:11.841780 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 21:26:11.850585 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:26:11.852251 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:26:11.855294 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 21:26:11.858178 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 21:26:11.900345 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 21:26:11.900465 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 21:26:11.904070 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 21:26:11.906116 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 21:26:11.908171 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 21:26:11.910316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 21:26:11.935021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:26:11.938559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 21:26:11.959236 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:26:11.961482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:26:11.963829 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 21:26:11.965617 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 21:26:11.966597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:26:11.969070 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:26:11.971075 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:26:11.972899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:26:11.975073 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:26:11.977325 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:26:11.979482 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
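Annotation: the files stage above was driven by a rendered Ignition config; the earlier ignition-fetch condition check referenced it at /run/ignition.json. Assuming that file is still present in the initramfs at this point, the operations just logged (files, links, units) could be listed with jq, for example:

```bash
# Illustration only: list what the Ignition "files" stage was asked to do,
# assuming the rendered config sits at /run/ignition.json as the
# ConditionPathExists check earlier in this log suggests.
jq -r '.storage.files[]?.path' /run/ignition.json   # e.g. /opt/helm-v3.13.2-linux-amd64.tar.gz
jq -r '.storage.links[]?.path' /run/ignition.json   # e.g. /etc/extensions/kubernetes.raw
jq -r '.systemd.units[]?.name' /run/ignition.json   # e.g. prepare-helm.service
```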
Mar 20 21:26:11.981485 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:26:11.983881 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:26:11.985903 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:26:11.987895 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:26:11.989558 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:26:11.990535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:26:11.992787 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:26:11.994960 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:26:11.997334 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:26:11.998392 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:26:12.000993 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:26:12.002091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:26:12.004539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:26:12.005716 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:26:12.008287 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:26:12.010184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:26:12.014708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:26:12.015023 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:26:12.017950 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:26:12.019972 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:26:12.020055 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:26:12.021692 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:26:12.021769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:26:12.023540 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:26:12.023656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:26:12.025832 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:26:12.025971 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:26:12.028992 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:26:12.030527 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 21:26:12.032226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 21:26:12.032343 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:26:12.034051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:26:12.034161 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:26:12.041875 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:26:12.041982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 20 21:26:12.052953 ignition[1008]: INFO : Ignition 2.20.0 Mar 20 21:26:12.052953 ignition[1008]: INFO : Stage: umount Mar 20 21:26:12.054655 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:26:12.054655 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:26:12.054655 ignition[1008]: INFO : umount: umount passed Mar 20 21:26:12.054655 ignition[1008]: INFO : Ignition finished successfully Mar 20 21:26:12.056109 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:26:12.056244 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:26:12.058039 systemd[1]: Stopped target network.target - Network. Mar 20 21:26:12.059466 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:26:12.059522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:26:12.061321 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:26:12.061370 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:26:12.063542 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:26:12.063588 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:26:12.065493 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 21:26:12.065541 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:26:12.067784 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:26:12.069699 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:26:12.072583 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 21:26:12.076580 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:26:12.076735 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:26:12.082136 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:26:12.082424 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 21:26:12.082564 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:26:12.084892 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:26:12.085777 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:26:12.085828 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:26:12.088213 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:26:12.089214 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:26:12.089267 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:26:12.091569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:26:12.091617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:26:12.093855 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 21:26:12.093904 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 21:26:12.095760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:26:12.095806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:26:12.098106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:26:12.103809 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 20 21:26:12.103901 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:26:12.128544 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 21:26:12.128765 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:26:12.131251 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 21:26:12.131400 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 21:26:12.133893 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 21:26:12.133960 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 21:26:12.135977 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 21:26:12.136018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:26:12.137948 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 21:26:12.138000 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:26:12.140226 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 21:26:12.140276 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 21:26:12.142152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:26:12.142209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:26:12.145425 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 21:26:12.147119 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 21:26:12.147194 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:26:12.150255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 20 21:26:12.150320 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:26:12.152425 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 21:26:12.152486 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:26:12.154921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:26:12.154981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:26:12.159176 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 20 21:26:12.159257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:26:12.168203 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 21:26:12.168322 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 21:26:12.296626 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 21:26:12.296776 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 20 21:26:12.297923 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 21:26:12.299941 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 21:26:12.300002 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 21:26:12.301064 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 21:26:12.322870 systemd[1]: Switching root. Mar 20 21:26:12.357018 systemd-journald[193]: Journal stopped Mar 20 21:26:13.641221 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Mar 20 21:26:13.641287 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 21:26:13.641306 kernel: SELinux: policy capability open_perms=1 Mar 20 21:26:13.641318 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 21:26:13.641329 kernel: SELinux: policy capability always_check_network=0 Mar 20 21:26:13.641340 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 21:26:13.641352 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 21:26:13.641369 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 21:26:13.641380 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 21:26:13.641392 kernel: audit: type=1403 audit(1742505972.868:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 21:26:13.641407 systemd[1]: Successfully loaded SELinux policy in 39.329ms. Mar 20 21:26:13.641431 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.816ms. Mar 20 21:26:13.641444 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:26:13.641457 systemd[1]: Detected virtualization kvm. Mar 20 21:26:13.641469 systemd[1]: Detected architecture x86-64. Mar 20 21:26:13.641483 systemd[1]: Detected first boot. Mar 20 21:26:13.641496 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:26:13.641513 zram_generator::config[1057]: No configuration found. Mar 20 21:26:13.641529 kernel: Guest personality initialized and is inactive Mar 20 21:26:13.641541 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 20 21:26:13.641552 kernel: Initialized host personality Mar 20 21:26:13.641563 kernel: NET: Registered PF_VSOCK protocol family Mar 20 21:26:13.641575 systemd[1]: Populated /etc with preset unit settings. Mar 20 21:26:13.641588 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 21:26:13.641600 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 21:26:13.641612 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 21:26:13.641642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 21:26:13.641659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 21:26:13.641676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 21:26:13.641689 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 21:26:13.641701 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 21:26:13.641713 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 21:26:13.641725 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 20 21:26:13.641738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 21:26:13.641750 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 21:26:13.641765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:26:13.641779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 20 21:26:13.641791 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 21:26:13.641803 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 21:26:13.641815 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 21:26:13.641828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:26:13.641840 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 20 21:26:13.641853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:26:13.641878 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 21:26:13.641892 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 21:26:13.641905 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 21:26:13.641917 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 21:26:13.641929 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:26:13.641942 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:26:13.641955 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:26:13.641967 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:26:13.641979 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 21:26:13.641994 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 21:26:13.642006 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 21:26:13.642018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:26:13.642031 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:26:13.642043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:26:13.642057 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 21:26:13.642069 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 21:26:13.642081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 21:26:13.642094 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 21:26:13.642109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:13.642121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 21:26:13.642133 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 21:26:13.642154 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 21:26:13.642167 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 21:26:13.642181 systemd[1]: Reached target machines.target - Containers. Mar 20 21:26:13.642195 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 21:26:13.642210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:26:13.642225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 20 21:26:13.642237 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 21:26:13.642249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:26:13.642261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:26:13.642273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:26:13.642286 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 21:26:13.642298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:26:13.642311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 21:26:13.642323 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 21:26:13.642358 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 21:26:13.642383 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 21:26:13.642397 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 21:26:13.642417 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:26:13.642430 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:26:13.642442 kernel: fuse: init (API version 7.39) Mar 20 21:26:13.642467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:26:13.642479 kernel: loop: module loaded Mar 20 21:26:13.642492 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 21:26:13.642508 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 21:26:13.642521 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 21:26:13.642533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:26:13.642545 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 21:26:13.642557 systemd[1]: Stopped verity-setup.service. Mar 20 21:26:13.642573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:13.642585 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 21:26:13.642616 systemd-journald[1128]: Collecting audit messages is disabled. Mar 20 21:26:13.642697 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 21:26:13.642711 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 21:26:13.642727 systemd-journald[1128]: Journal started Mar 20 21:26:13.642758 systemd-journald[1128]: Runtime Journal (/run/log/journal/c72abe040f7c4c129dc6fa0f88faa653) is 6M, max 48.3M, 42.3M free. Mar 20 21:26:13.417221 systemd[1]: Queued start job for default target multi-user.target. Mar 20 21:26:13.430501 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 21:26:13.430967 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 21:26:13.650385 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:26:13.650132 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 20 21:26:13.651533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 21:26:13.652766 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 21:26:13.655988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:26:13.657644 kernel: ACPI: bus type drm_connector registered Mar 20 21:26:13.658187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 21:26:13.659700 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 21:26:13.659900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 20 21:26:13.661359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:26:13.661559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:26:13.662968 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:26:13.663178 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:26:13.664525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:26:13.664742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:26:13.666231 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 21:26:13.666432 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 21:26:13.667935 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:26:13.668133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:26:13.669531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:26:13.670929 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 21:26:13.672451 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 21:26:13.677118 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 21:26:13.688819 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 21:26:13.691515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:26:13.693658 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:26:13.694772 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:26:13.694799 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:26:13.696762 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:26:13.704741 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:26:13.707979 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:26:13.709467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:26:13.711069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:26:13.714769 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 21:26:13.716085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:26:13.718905 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 20 21:26:13.720107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:26:13.722110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:26:13.726976 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:26:13.728552 systemd-journald[1128]: Time spent on flushing to /var/log/journal/c72abe040f7c4c129dc6fa0f88faa653 is 20.614ms for 968 entries. Mar 20 21:26:13.728552 systemd-journald[1128]: System Journal (/var/log/journal/c72abe040f7c4c129dc6fa0f88faa653) is 8M, max 195.6M, 187.6M free. Mar 20 21:26:13.756164 systemd-journald[1128]: Received client request to flush runtime journal. Mar 20 21:26:13.729454 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:26:13.745661 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:26:13.747059 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:26:13.750004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:26:13.752011 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:26:13.757706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:26:13.759692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:26:13.762381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:26:13.763855 kernel: loop0: detected capacity change from 0 to 151640 Mar 20 21:26:13.767046 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:26:13.771809 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:26:13.774485 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:26:13.780797 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 20 21:26:13.780814 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 20 21:26:13.790093 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:26:13.789381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:26:13.793414 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:26:13.797293 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 20 21:26:13.811953 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:26:13.824720 kernel: loop1: detected capacity change from 0 to 210664 Mar 20 21:26:13.826406 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 21:26:13.829645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:26:13.859919 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Mar 20 21:26:13.859937 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Mar 20 21:26:13.860657 kernel: loop2: detected capacity change from 0 to 109808 Mar 20 21:26:13.866817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
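Annotation: systemd-journal-flush.service above moves the volatile runtime journal (6M in /run/log/journal) into the persistent system journal (8M under /var/log/journal). The same flush, plus a usage check, can be requested by hand, for example:

```bash
# Illustration only: flush the volatile runtime journal to persistent
# storage and report how much disk the journals now use.
journalctl --flush        # what systemd-journal-flush.service triggers
journalctl --disk-usage   # combined runtime + system journal size
```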
Mar 20 21:26:13.892651 kernel: loop3: detected capacity change from 0 to 151640 Mar 20 21:26:13.909670 kernel: loop4: detected capacity change from 0 to 210664 Mar 20 21:26:13.921688 kernel: loop5: detected capacity change from 0 to 109808 Mar 20 21:26:13.933591 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:26:13.934257 (sd-merge)[1204]: Merged extensions into '/usr'. Mar 20 21:26:13.938564 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 21:26:13.938579 systemd[1]: Reloading... Mar 20 21:26:14.001653 zram_generator::config[1235]: No configuration found. Mar 20 21:26:14.039637 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:26:14.126001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:26:14.189977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:26:14.190569 systemd[1]: Reloading finished in 251 ms. Mar 20 21:26:14.216028 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:26:14.217533 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 21:26:14.233034 systemd[1]: Starting ensure-sysext.service... Mar 20 21:26:14.234816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:26:14.349392 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:26:14.349696 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:26:14.350926 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:26:14.351325 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Mar 20 21:26:14.351435 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Mar 20 21:26:14.355658 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:26:14.355679 systemd[1]: Reloading... Mar 20 21:26:14.356001 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:26:14.356013 systemd-tmpfiles[1270]: Skipping /boot Mar 20 21:26:14.369084 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:26:14.369099 systemd-tmpfiles[1270]: Skipping /boot Mar 20 21:26:14.408650 zram_generator::config[1302]: No configuration found. Mar 20 21:26:14.516377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:26:14.580816 systemd[1]: Reloading finished in 224 ms. Mar 20 21:26:14.596467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:26:14.610899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:26:14.620430 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:26:14.622801 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
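Annotation: the (sd-merge) lines above are systemd-sysext combining the containerd-flatcar, docker-flatcar and kubernetes extension images into an overlay on /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier. Once the system is up, the merge can be inspected or redone with, for example:

```bash
# Illustration only: show which system extensions are merged and from
# where, then re-run the merge after adding or replacing an image.
systemd-sysext status    # hierarchies (/usr, /opt) and merged extensions
ls -l /etc/extensions/   # kubernetes.raw symlink written by Ignition
systemd-sysext refresh   # unmerge + merge, picking up new images
```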
Mar 20 21:26:14.625196 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 21:26:14.633682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:26:14.637909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:26:14.640929 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:26:14.645197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:14.645370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:26:14.652689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:26:14.657817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:26:14.661662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:26:14.662036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:26:14.662142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:26:14.664890 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:26:14.666449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:14.668370 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:26:14.670190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:26:14.670402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:26:14.676237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:26:14.676814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:26:14.681794 augenrules[1367]: No rules Mar 20 21:26:14.680241 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:26:14.680536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:26:14.683602 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:26:14.683880 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:26:14.690006 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Mar 20 21:26:14.694761 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:26:14.701038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:14.702616 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:26:14.703811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:26:14.705033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:26:14.718818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:26:14.721945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 20 21:26:14.726213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:26:14.727366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:26:14.727465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:26:14.731477 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 21:26:14.732732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:26:14.733939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:26:14.741452 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:26:14.745844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:26:14.746109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:26:14.752742 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:26:14.752956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:26:14.755853 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:26:14.756094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:26:14.759214 systemd[1]: Finished ensure-sysext.service. Mar 20 21:26:14.761149 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 21:26:14.769254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389) Mar 20 21:26:14.776803 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:26:14.783461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:26:14.783756 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:26:14.790644 augenrules[1378]: /sbin/augenrules: No change Mar 20 21:26:14.791099 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 20 21:26:14.799019 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:26:14.802930 augenrules[1434]: No rules Mar 20 21:26:14.801131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:26:14.801204 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:26:14.804010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:26:14.805744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:26:14.806164 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:26:14.806420 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:26:14.840534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:26:14.844070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 20 21:26:14.846773 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 20 21:26:14.850648 kernel: ACPI: button: Power Button [PWRF] Mar 20 21:26:14.865564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:26:14.876430 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 20 21:26:14.876750 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 20 21:26:14.876933 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 20 21:26:14.877686 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 20 21:26:14.917618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:26:14.945570 systemd-networkd[1432]: lo: Link UP Mar 20 21:26:14.951651 systemd-networkd[1432]: lo: Gained carrier Mar 20 21:26:14.953333 systemd-networkd[1432]: Enumeration completed Mar 20 21:26:14.953441 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:26:14.953705 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:26:14.953710 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:26:14.955026 systemd-networkd[1432]: eth0: Link UP Mar 20 21:26:14.955086 systemd-networkd[1432]: eth0: Gained carrier Mar 20 21:26:14.955143 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:26:14.957417 systemd-resolved[1341]: Positive Trust Anchors: Mar 20 21:26:14.957772 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:26:14.957779 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:26:14.957929 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:26:14.962882 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:26:14.962998 systemd-resolved[1341]: Defaulting to hostname 'linux'. Mar 20 21:26:14.982735 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:26:14.985944 systemd[1]: Reached target network.target - Network. Mar 20 21:26:14.986860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 20 21:26:14.992800 kernel: kvm_amd: TSC scaling supported Mar 20 21:26:14.992847 kernel: mousedev: PS/2 mouse device common for all mice Mar 20 21:26:14.992874 kernel: kvm_amd: Nested Virtualization enabled Mar 20 21:26:14.992827 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:26:14.993652 kernel: kvm_amd: Nested Paging enabled Mar 20 21:26:14.994371 kernel: kvm_amd: LBR virtualization supported Mar 20 21:26:14.995316 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 20 21:26:14.995350 kernel: kvm_amd: Virtual GIF supported Mar 20 21:26:15.007839 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:26:15.542369 systemd-resolved[1341]: Clock change detected. Flushing caches. Mar 20 21:26:15.542409 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:26:15.542450 systemd-timesyncd[1439]: Initial clock synchronization to Thu 2025-03-20 21:26:15.542326 UTC. Mar 20 21:26:15.543499 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:26:15.546722 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:26:15.553289 kernel: EDAC MC: Ver: 3.0.0 Mar 20 21:26:15.585532 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:26:15.599558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:26:15.602981 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:26:15.624800 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:26:15.654515 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:26:15.655990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:26:15.657116 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:26:15.658318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:26:15.659576 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:26:15.661031 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:26:15.662224 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:26:15.663488 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 21:26:15.664725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:26:15.664757 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:26:15.665670 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:26:15.667474 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:26:15.670218 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:26:15.673543 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:26:15.674950 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:26:15.676212 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:26:15.681757 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
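Annotation: at this point eth0 holds a DHCPv4 lease (10.0.0.79/16 via 10.0.0.1) and systemd-timesyncd has synchronized against 10.0.0.1:123, which is why the timestamps jump at the "Clock change detected" line above. The state those messages report can be inspected with, for example:

```bash
# Illustration only: inspect the link, lease and time sync that the
# systemd-networkd / systemd-timesyncd messages above describe.
networkctl status eth0        # address, gateway, DNS from the DHCP lease
resolvectl status             # what systemd-resolved is actually using
timedatectl timesync-status   # NTP server (10.0.0.1), poll interval, offset
```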
Mar 20 21:26:15.683154 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:26:15.685727 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:26:15.687354 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:26:15.688562 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:26:15.689552 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:26:15.690535 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:26:15.690565 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:26:15.691513 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:26:15.693553 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:26:15.693661 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:26:15.695778 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:26:15.702255 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:26:15.702585 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:26:15.703847 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:26:15.706068 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:26:15.708230 jq[1475]: false Mar 20 21:26:15.709899 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:26:15.712594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:26:15.717900 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:26:15.719926 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:26:15.720480 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:26:15.723375 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:26:15.727206 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:26:15.731457 extend-filesystems[1476]: Found loop3 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found loop4 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found loop5 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found sr0 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda1 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda2 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda3 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found usr Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda4 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda6 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda7 Mar 20 21:26:15.731457 extend-filesystems[1476]: Found vda9 Mar 20 21:26:15.731457 extend-filesystems[1476]: Checking size of /dev/vda9 Mar 20 21:26:15.729675 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 20 21:26:15.763835 extend-filesystems[1476]: Resized partition /dev/vda9 Mar 20 21:26:15.739617 dbus-daemon[1474]: [system] SELinux support is enabled Mar 20 21:26:15.731877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:26:15.767671 extend-filesystems[1499]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:26:15.768981 jq[1490]: true Mar 20 21:26:15.733351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:26:15.769476 update_engine[1485]: I20250320 21:26:15.752525 1485 main.cc:92] Flatcar Update Engine starting Mar 20 21:26:15.769476 update_engine[1485]: I20250320 21:26:15.758807 1485 update_check_scheduler.cc:74] Next update check in 8m57s Mar 20 21:26:15.738254 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:26:15.738644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 21:26:15.744039 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:26:15.770722 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:26:15.753619 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:26:15.753861 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 21:26:15.775761 jq[1500]: true Mar 20 21:26:15.776434 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:26:15.776468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:26:15.779679 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:26:15.780296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1383) Mar 20 21:26:15.779796 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:26:15.784512 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:26:15.796568 tar[1498]: linux-amd64/helm Mar 20 21:26:15.801280 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:26:15.803221 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:26:15.807437 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 21:26:15.828315 systemd-logind[1483]: Watching system buttons on /dev/input/event1 (Power Button) Mar 20 21:26:15.829329 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 20 21:26:15.829749 systemd-logind[1483]: New seat seat0. Mar 20 21:26:15.830012 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:26:15.830012 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:26:15.830012 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:26:15.851761 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Mar 20 21:26:15.854196 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:26:15.833238 systemd[1]: extend-filesystems.service: Deactivated successfully. 
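The extend-filesystems unit above grew the root filesystem on /dev/vda9 online from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to about 7.1 GiB. A quick check of that arithmetic, with the values copied from the resize2fs output:

    BLOCK_SIZE = 4096                              # resize2fs reports "(4k) blocks"
    OLD_BLOCKS, NEW_BLOCKS = 553_472, 1_864_699    # block counts from the log above

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
    print(f"after resize:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB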
Mar 20 21:26:15.833523 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:26:15.849450 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:26:15.853121 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:26:15.858409 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:26:15.858835 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 20 21:26:15.893546 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:26:15.919073 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:26:15.923140 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:26:15.945341 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:26:15.945763 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:26:15.949012 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:26:15.983971 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:26:15.987042 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:26:15.987159 containerd[1501]: time="2025-03-20T21:26:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:26:15.988459 containerd[1501]: time="2025-03-20T21:26:15.988382103Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:26:15.992556 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 20 21:26:15.993878 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 20 21:26:15.998003 containerd[1501]: time="2025-03-20T21:26:15.997967135Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.692µs" Mar 20 21:26:15.998003 containerd[1501]: time="2025-03-20T21:26:15.997995498Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:26:15.998078 containerd[1501]: time="2025-03-20T21:26:15.998013111Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:26:15.998193 containerd[1501]: time="2025-03-20T21:26:15.998172791Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:26:15.998221 containerd[1501]: time="2025-03-20T21:26:15.998192097Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:26:15.998221 containerd[1501]: time="2025-03-20T21:26:15.998215791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998321 containerd[1501]: time="2025-03-20T21:26:15.998300169Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998321 containerd[1501]: time="2025-03-20T21:26:15.998315929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998576 containerd[1501]: time="2025-03-20T21:26:15.998554556Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998576 containerd[1501]: time="2025-03-20T21:26:15.998571528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998623 containerd[1501]: time="2025-03-20T21:26:15.998581828Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998623 containerd[1501]: time="2025-03-20T21:26:15.998589993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998702 containerd[1501]: time="2025-03-20T21:26:15.998683328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998931 containerd[1501]: time="2025-03-20T21:26:15.998910794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998954 containerd[1501]: time="2025-03-20T21:26:15.998944377Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:26:15.998974 containerd[1501]: time="2025-03-20T21:26:15.998954406Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:26:15.998999 containerd[1501]: time="2025-03-20T21:26:15.998986797Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:26:15.999746 containerd[1501]: 
time="2025-03-20T21:26:15.999709442Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:26:15.999815 containerd[1501]: time="2025-03-20T21:26:15.999797437Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:26:16.005246 containerd[1501]: time="2025-03-20T21:26:16.005212480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:26:16.005293 containerd[1501]: time="2025-03-20T21:26:16.005255982Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:26:16.005293 containerd[1501]: time="2025-03-20T21:26:16.005280548Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:26:16.005330 containerd[1501]: time="2025-03-20T21:26:16.005292560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:26:16.005330 containerd[1501]: time="2025-03-20T21:26:16.005305304Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:26:16.005330 containerd[1501]: time="2025-03-20T21:26:16.005316525Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:26:16.005330 containerd[1501]: time="2025-03-20T21:26:16.005328868Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:26:16.005411 containerd[1501]: time="2025-03-20T21:26:16.005341241Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:26:16.005411 containerd[1501]: time="2025-03-20T21:26:16.005353354Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:26:16.005411 containerd[1501]: time="2025-03-20T21:26:16.005364966Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:26:16.005411 containerd[1501]: time="2025-03-20T21:26:16.005375075Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:26:16.005411 containerd[1501]: time="2025-03-20T21:26:16.005391997Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:26:16.005523 containerd[1501]: time="2025-03-20T21:26:16.005498166Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:26:16.005523 containerd[1501]: time="2025-03-20T21:26:16.005518995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:26:16.005568 containerd[1501]: time="2025-03-20T21:26:16.005530637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:26:16.005568 containerd[1501]: time="2025-03-20T21:26:16.005541086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:26:16.005568 containerd[1501]: time="2025-03-20T21:26:16.005551586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:26:16.005568 containerd[1501]: time="2025-03-20T21:26:16.005561064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:26:16.005641 containerd[1501]: 
time="2025-03-20T21:26:16.005572555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:26:16.005641 containerd[1501]: time="2025-03-20T21:26:16.005595919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:26:16.005641 containerd[1501]: time="2025-03-20T21:26:16.005607120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:26:16.005641 containerd[1501]: time="2025-03-20T21:26:16.005617730Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:26:16.005641 containerd[1501]: time="2025-03-20T21:26:16.005627438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:26:16.005747 containerd[1501]: time="2025-03-20T21:26:16.005688312Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:26:16.005747 containerd[1501]: time="2025-03-20T21:26:16.005700866Z" level=info msg="Start snapshots syncer" Mar 20 21:26:16.005747 containerd[1501]: time="2025-03-20T21:26:16.005735911Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:26:16.005992 containerd[1501]: time="2025-03-20T21:26:16.005953079Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:26:16.006097 containerd[1501]: time="2025-03-20T21:26:16.006000758Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:26:16.006097 containerd[1501]: time="2025-03-20T21:26:16.006063085Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Mar 20 21:26:16.006182 containerd[1501]: time="2025-03-20T21:26:16.006161890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:26:16.006203 containerd[1501]: time="2025-03-20T21:26:16.006189642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:26:16.006222 containerd[1501]: time="2025-03-20T21:26:16.006200412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:26:16.006222 containerd[1501]: time="2025-03-20T21:26:16.006211223Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:26:16.006271 containerd[1501]: time="2025-03-20T21:26:16.006225099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:26:16.006271 containerd[1501]: time="2025-03-20T21:26:16.006236440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:26:16.006271 containerd[1501]: time="2025-03-20T21:26:16.006246679Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:26:16.006333 containerd[1501]: time="2025-03-20T21:26:16.006293156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:26:16.006333 containerd[1501]: time="2025-03-20T21:26:16.006306231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:26:16.006333 containerd[1501]: time="2025-03-20T21:26:16.006315638Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:26:16.006386 containerd[1501]: time="2025-03-20T21:26:16.006349412Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:26:16.006386 containerd[1501]: time="2025-03-20T21:26:16.006361074Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:26:16.006386 containerd[1501]: time="2025-03-20T21:26:16.006369339Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:26:16.006386 containerd[1501]: time="2025-03-20T21:26:16.006378797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006387704Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006406799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006418181Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006453367Z" level=info msg="runtime interface created" Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006459418Z" level=info msg="created NRI interface" Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006468365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006479155Z" level=info msg="Connect containerd service" Mar 20 21:26:16.006516 containerd[1501]: time="2025-03-20T21:26:16.006499934Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:26:16.007211 containerd[1501]: time="2025-03-20T21:26:16.007183195Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:26:16.084365 containerd[1501]: time="2025-03-20T21:26:16.084300510Z" level=info msg="Start subscribing containerd event" Mar 20 21:26:16.084486 containerd[1501]: time="2025-03-20T21:26:16.084381963Z" level=info msg="Start recovering state" Mar 20 21:26:16.084509 containerd[1501]: time="2025-03-20T21:26:16.084479215Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:26:16.084529 containerd[1501]: time="2025-03-20T21:26:16.084505815Z" level=info msg="Start event monitor" Mar 20 21:26:16.084529 containerd[1501]: time="2025-03-20T21:26:16.084524440Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:26:16.084568 containerd[1501]: time="2025-03-20T21:26:16.084533507Z" level=info msg="Start streaming server" Mar 20 21:26:16.084568 containerd[1501]: time="2025-03-20T21:26:16.084542794Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:26:16.084624 containerd[1501]: time="2025-03-20T21:26:16.084543836Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:26:16.084624 containerd[1501]: time="2025-03-20T21:26:16.084610040Z" level=info msg="runtime interface starting up..." Mar 20 21:26:16.084624 containerd[1501]: time="2025-03-20T21:26:16.084616262Z" level=info msg="starting plugins..." Mar 20 21:26:16.084677 containerd[1501]: time="2025-03-20T21:26:16.084635057Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:26:16.084800 containerd[1501]: time="2025-03-20T21:26:16.084767215Z" level=info msg="containerd successfully booted in 0.098140s" Mar 20 21:26:16.084878 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 21:26:16.184682 tar[1498]: linux-amd64/LICENSE Mar 20 21:26:16.184773 tar[1498]: linux-amd64/README.md Mar 20 21:26:16.205529 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:26:17.231416 systemd-networkd[1432]: eth0: Gained IPv6LL Mar 20 21:26:17.234612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:26:17.236424 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:26:17.239028 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:26:17.241417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:17.243612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:26:17.266483 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:26:17.266779 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:26:17.268657 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:26:17.271006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
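containerd comes up with its CRI plugin but logs that no CNI configuration exists in /etc/cni/net.d yet, so pod networking cannot be set up at this point; on a kubeadm-style node that directory is normally populated later by whichever CNI add-on the cluster installs. Purely to illustrate the general shape of such a file, the sketch below prints a generic bridge conflist (the name, path and subnet are made up for illustration and are not the configuration this node ends up using):

    import json
    from pathlib import Path

    # Generic bridge conflist; name, path and subnet are hypothetical.
    example = {
        "cniVersion": "0.4.0",
        "name": "example-bridge",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.85.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    target = Path("/etc/cni/net.d/10-example.conflist")   # hypothetical file name
    print(f"a conflist such as {target} would look like:")
    print(json.dumps(example, indent=2))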
Mar 20 21:26:17.867918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:17.869543 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:26:17.870962 systemd[1]: Startup finished in 698ms (kernel) + 6.178s (initrd) + 4.507s (userspace) = 11.384s. Mar 20 21:26:17.902696 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:26:18.336558 kubelet[1602]: E0320 21:26:18.336431 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:26:18.340749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:26:18.340971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:26:18.341401 systemd[1]: kubelet.service: Consumed 952ms CPU time, 245.2M memory peak. Mar 20 21:26:20.160874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:26:20.162180 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:36384.service - OpenSSH per-connection server daemon (10.0.0.1:36384). Mar 20 21:26:20.218825 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 36384 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:20.220489 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:20.231379 systemd-logind[1483]: New session 1 of user core. Mar 20 21:26:20.232649 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:26:20.233921 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:26:20.261317 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:26:20.263899 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:26:20.285753 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:26:20.287857 systemd-logind[1483]: New session c1 of user core. Mar 20 21:26:20.434191 systemd[1621]: Queued start job for default target default.target. Mar 20 21:26:20.447525 systemd[1621]: Created slice app.slice - User Application Slice. Mar 20 21:26:20.447551 systemd[1621]: Reached target paths.target - Paths. Mar 20 21:26:20.447592 systemd[1621]: Reached target timers.target - Timers. Mar 20 21:26:20.449097 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:26:20.460036 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:26:20.460162 systemd[1621]: Reached target sockets.target - Sockets. Mar 20 21:26:20.460204 systemd[1621]: Reached target basic.target - Basic System. Mar 20 21:26:20.460246 systemd[1621]: Reached target default.target - Main User Target. Mar 20 21:26:20.460293 systemd[1621]: Startup finished in 166ms. Mar 20 21:26:20.460743 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:26:20.462389 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:26:20.524277 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:36400.service - OpenSSH per-connection server daemon (10.0.0.1:36400). 
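The kubelet crash above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until something like kubeadm init or kubeadm join writes it, so the service exits with status 1 and systemd keeps it in a restart loop. A small sketch of detecting that state, with the general shape of a KubeletConfiguration embedded as a string (minimal and illustrative only, not the config this node ends up with):

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the error message above

    # Minimal illustration of the file's shape; the systemd cgroup driver matches
    # the CgroupDriver the kubelet reports later in this log.
    MINIMAL_EXAMPLE = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    if not CONFIG.exists():
        print(f"{CONFIG} is missing; the kubelet will keep crash-looping until it is written.")
        print("A KubeletConfiguration generally looks like:")
        print(MINIMAL_EXAMPLE)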
Mar 20 21:26:20.574883 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 36400 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:20.576459 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:20.580622 systemd-logind[1483]: New session 2 of user core. Mar 20 21:26:20.597407 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 21:26:20.650240 sshd[1634]: Connection closed by 10.0.0.1 port 36400 Mar 20 21:26:20.650555 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:20.667026 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:36400.service: Deactivated successfully. Mar 20 21:26:20.668927 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:26:20.670234 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:26:20.671476 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:36412.service - OpenSSH per-connection server daemon (10.0.0.1:36412). Mar 20 21:26:20.672193 systemd-logind[1483]: Removed session 2. Mar 20 21:26:20.721358 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 36412 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:20.723172 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:20.727529 systemd-logind[1483]: New session 3 of user core. Mar 20 21:26:20.737381 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:26:20.786455 sshd[1642]: Connection closed by 10.0.0.1 port 36412 Mar 20 21:26:20.786840 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:20.797902 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:36412.service: Deactivated successfully. Mar 20 21:26:20.800108 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:26:20.801614 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:26:20.802893 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:36424.service - OpenSSH per-connection server daemon (10.0.0.1:36424). Mar 20 21:26:20.803566 systemd-logind[1483]: Removed session 3. Mar 20 21:26:20.856448 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 36424 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:20.858003 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:20.862362 systemd-logind[1483]: New session 4 of user core. Mar 20 21:26:20.872383 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:26:20.925841 sshd[1651]: Connection closed by 10.0.0.1 port 36424 Mar 20 21:26:20.926242 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:20.943571 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:36424.service: Deactivated successfully. Mar 20 21:26:20.946961 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:26:20.949650 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:26:20.951236 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438). Mar 20 21:26:20.952323 systemd-logind[1483]: Removed session 4. 
Mar 20 21:26:21.009805 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:21.011386 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:21.015620 systemd-logind[1483]: New session 5 of user core. Mar 20 21:26:21.026379 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 20 21:26:21.166997 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:26:21.167338 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:26:21.183434 sudo[1660]: pam_unix(sudo:session): session closed for user root Mar 20 21:26:21.184958 sshd[1659]: Connection closed by 10.0.0.1 port 36438 Mar 20 21:26:21.185308 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:21.197891 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:36438.service: Deactivated successfully. Mar 20 21:26:21.199613 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:26:21.201231 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:26:21.202569 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:36450.service - OpenSSH per-connection server daemon (10.0.0.1:36450). Mar 20 21:26:21.203279 systemd-logind[1483]: Removed session 5. Mar 20 21:26:21.250528 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 36450 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:21.251990 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:21.256033 systemd-logind[1483]: New session 6 of user core. Mar 20 21:26:21.265365 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 21:26:21.319351 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:26:21.319674 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:26:21.323232 sudo[1670]: pam_unix(sudo:session): session closed for user root Mar 20 21:26:21.329796 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:26:21.330129 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:26:21.339588 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:26:21.386426 augenrules[1692]: No rules Mar 20 21:26:21.388225 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:26:21.388506 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:26:21.389632 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 20 21:26:21.391103 sshd[1668]: Connection closed by 10.0.0.1 port 36450 Mar 20 21:26:21.391413 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:21.399839 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:36450.service: Deactivated successfully. Mar 20 21:26:21.401758 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:26:21.403088 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:26:21.404331 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:36464.service - OpenSSH per-connection server daemon (10.0.0.1:36464). Mar 20 21:26:21.405038 systemd-logind[1483]: Removed session 6. 
Mar 20 21:26:21.455764 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 36464 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:26:21.457248 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:21.461381 systemd-logind[1483]: New session 7 of user core. Mar 20 21:26:21.478463 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 20 21:26:21.532246 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:26:21.532603 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:26:21.829080 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:26:21.846590 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:26:22.097974 dockerd[1724]: time="2025-03-20T21:26:22.097826109Z" level=info msg="Starting up" Mar 20 21:26:22.101192 dockerd[1724]: time="2025-03-20T21:26:22.101138770Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:26:22.483796 dockerd[1724]: time="2025-03-20T21:26:22.483636533Z" level=info msg="Loading containers: start." Mar 20 21:26:22.678300 kernel: Initializing XFRM netlink socket Mar 20 21:26:22.759330 systemd-networkd[1432]: docker0: Link UP Mar 20 21:26:22.857949 dockerd[1724]: time="2025-03-20T21:26:22.857888285Z" level=info msg="Loading containers: done." Mar 20 21:26:22.872062 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1985204417-merged.mount: Deactivated successfully. Mar 20 21:26:22.874349 dockerd[1724]: time="2025-03-20T21:26:22.874312982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:26:22.874416 dockerd[1724]: time="2025-03-20T21:26:22.874395877Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:26:22.874516 dockerd[1724]: time="2025-03-20T21:26:22.874498459Z" level=info msg="Daemon has completed initialization" Mar 20 21:26:22.912382 dockerd[1724]: time="2025-03-20T21:26:22.912298795Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:26:22.912476 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:26:23.633222 containerd[1501]: time="2025-03-20T21:26:23.633169568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 20 21:26:24.324750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182635095.mount: Deactivated successfully. 
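By this point dockerd has created and raised the docker0 bridge ("docker0: Link UP" above). A trivial check that the interface exists, and what state the kernel reports for it, using only sysfs:

    from pathlib import Path

    state = Path("/sys/class/net/docker0/operstate")
    print(state.read_text().strip() if state.exists() else "docker0 not present")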
Mar 20 21:26:25.615849 containerd[1501]: time="2025-03-20T21:26:25.615771105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:25.643783 containerd[1501]: time="2025-03-20T21:26:25.643688009Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 20 21:26:25.682205 containerd[1501]: time="2025-03-20T21:26:25.682143463Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:25.722385 containerd[1501]: time="2025-03-20T21:26:25.722348838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:25.723146 containerd[1501]: time="2025-03-20T21:26:25.723105076Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.089886156s" Mar 20 21:26:25.723217 containerd[1501]: time="2025-03-20T21:26:25.723153116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 20 21:26:25.740438 containerd[1501]: time="2025-03-20T21:26:25.740404072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 20 21:26:27.877447 containerd[1501]: time="2025-03-20T21:26:27.877372787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:27.878422 containerd[1501]: time="2025-03-20T21:26:27.878331044Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 20 21:26:27.879377 containerd[1501]: time="2025-03-20T21:26:27.879348001Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:27.881855 containerd[1501]: time="2025-03-20T21:26:27.881812792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:27.882741 containerd[1501]: time="2025-03-20T21:26:27.882700307Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.142257883s" Mar 20 21:26:27.882785 containerd[1501]: time="2025-03-20T21:26:27.882743578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 20 
21:26:27.903673 containerd[1501]: time="2025-03-20T21:26:27.903630210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 20 21:26:28.591361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:26:28.592978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:28.790390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:28.796204 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:26:28.973233 kubelet[2025]: E0320 21:26:28.973056 2025 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:26:28.980489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:26:28.980786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:26:28.981391 systemd[1]: kubelet.service: Consumed 348ms CPU time, 98.8M memory peak. Mar 20 21:26:29.180509 containerd[1501]: time="2025-03-20T21:26:29.180449055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:29.181354 containerd[1501]: time="2025-03-20T21:26:29.181243154Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 20 21:26:29.182358 containerd[1501]: time="2025-03-20T21:26:29.182326165Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:29.184730 containerd[1501]: time="2025-03-20T21:26:29.184689646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:29.186032 containerd[1501]: time="2025-03-20T21:26:29.185965989Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.282287779s" Mar 20 21:26:29.186088 containerd[1501]: time="2025-03-20T21:26:29.186034899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 20 21:26:29.205736 containerd[1501]: time="2025-03-20T21:26:29.205689010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 20 21:26:30.810072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282964564.mount: Deactivated successfully. 
Mar 20 21:26:32.757672 containerd[1501]: time="2025-03-20T21:26:32.757597447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:32.800687 containerd[1501]: time="2025-03-20T21:26:32.800598605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 20 21:26:32.803803 containerd[1501]: time="2025-03-20T21:26:32.803763799Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:32.807566 containerd[1501]: time="2025-03-20T21:26:32.807509022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:32.807975 containerd[1501]: time="2025-03-20T21:26:32.807920092Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 3.602183153s" Mar 20 21:26:32.807975 containerd[1501]: time="2025-03-20T21:26:32.807964125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 20 21:26:32.829049 containerd[1501]: time="2025-03-20T21:26:32.829009836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 21:26:33.367768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698345732.mount: Deactivated successfully. 
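The pull log lines report both the bytes read and the wall-clock duration for each image, which gives a rough effective download rate. Recomputing a few of them (numbers copied from the log lines above; "bytes read" is the compressed transfer, so this is only an approximation of network throughput):

    # ("bytes read", seconds) copied from the containerd pull messages above.
    pulls = {
        "kube-apiserver:v1.30.11": (32_674_573, 2.089886156),
        "kube-scheduler:v1.30.11": (17_903_309, 1.282287779),
        "kube-proxy:v1.30.11":     (29_185_372, 3.602183153),
    }

    for image, (nbytes, seconds) in pulls.items():
        print(f"{image:28s} {nbytes / 2**20 / seconds:5.1f} MiB/s")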
Mar 20 21:26:34.074753 containerd[1501]: time="2025-03-20T21:26:34.074694594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.075629 containerd[1501]: time="2025-03-20T21:26:34.075549978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 20 21:26:34.076772 containerd[1501]: time="2025-03-20T21:26:34.076724070Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.079115 containerd[1501]: time="2025-03-20T21:26:34.079083453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.079990 containerd[1501]: time="2025-03-20T21:26:34.079953344Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.250902021s" Mar 20 21:26:34.080030 containerd[1501]: time="2025-03-20T21:26:34.079988671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 20 21:26:34.097370 containerd[1501]: time="2025-03-20T21:26:34.097334795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 20 21:26:34.630158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780136456.mount: Deactivated successfully. 
Mar 20 21:26:34.635282 containerd[1501]: time="2025-03-20T21:26:34.635210191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.636013 containerd[1501]: time="2025-03-20T21:26:34.635966709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 20 21:26:34.637103 containerd[1501]: time="2025-03-20T21:26:34.637066582Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.639155 containerd[1501]: time="2025-03-20T21:26:34.639112628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:34.639847 containerd[1501]: time="2025-03-20T21:26:34.639812741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 542.443782ms" Mar 20 21:26:34.639893 containerd[1501]: time="2025-03-20T21:26:34.639846945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 20 21:26:34.658830 containerd[1501]: time="2025-03-20T21:26:34.658790133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 20 21:26:35.760575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278770145.mount: Deactivated successfully. Mar 20 21:26:37.939322 containerd[1501]: time="2025-03-20T21:26:37.939244504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:37.940228 containerd[1501]: time="2025-03-20T21:26:37.940147067Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 20 21:26:37.941387 containerd[1501]: time="2025-03-20T21:26:37.941342368Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:37.943799 containerd[1501]: time="2025-03-20T21:26:37.943749892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:26:37.944722 containerd[1501]: time="2025-03-20T21:26:37.944689664Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.285862201s" Mar 20 21:26:37.944722 containerd[1501]: time="2025-03-20T21:26:37.944717386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 20 21:26:39.230991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 20 21:26:39.232723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:39.413028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:39.427657 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:26:39.467726 kubelet[2278]: E0320 21:26:39.467676 2278 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:26:39.471198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:26:39.471399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:26:39.471739 systemd[1]: kubelet.service: Consumed 197ms CPU time, 98.2M memory peak. Mar 20 21:26:39.873169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:39.873346 systemd[1]: kubelet.service: Consumed 197ms CPU time, 98.2M memory peak. Mar 20 21:26:39.875615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:39.900353 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)... Mar 20 21:26:39.900369 systemd[1]: Reloading... Mar 20 21:26:39.994299 zram_generator::config[2342]: No configuration found. Mar 20 21:26:40.440406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:26:40.542056 systemd[1]: Reloading finished in 641 ms. Mar 20 21:26:40.605391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:40.607999 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:40.609752 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:26:40.610012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:40.610046 systemd[1]: kubelet.service: Consumed 138ms CPU time, 83.6M memory peak. Mar 20 21:26:40.611573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:40.787232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:40.791271 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:26:40.828349 kubelet[2388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:26:40.828349 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:26:40.828349 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 21:26:40.828698 kubelet[2388]: I0320 21:26:40.828375 2388 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:26:41.340329 kubelet[2388]: I0320 21:26:41.340289 2388 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 21:26:41.340329 kubelet[2388]: I0320 21:26:41.340321 2388 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:26:41.340549 kubelet[2388]: I0320 21:26:41.340534 2388 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 21:26:41.353764 kubelet[2388]: I0320 21:26:41.353724 2388 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:26:41.354487 kubelet[2388]: E0320 21:26:41.354459 2388 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.366077 kubelet[2388]: I0320 21:26:41.366048 2388 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 21:26:41.367748 kubelet[2388]: I0320 21:26:41.367705 2388 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:26:41.367895 kubelet[2388]: I0320 21:26:41.367739 2388 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 21:26:41.368306 kubelet[2388]: I0320 21:26:41.368282 2388 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:26:41.368306 kubelet[2388]: I0320 21:26:41.368297 2388 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 21:26:41.368446 kubelet[2388]: I0320 21:26:41.368423 2388 state_mem.go:36] "Initialized new in-memory state store" Mar 20 
21:26:41.369013 kubelet[2388]: I0320 21:26:41.368989 2388 kubelet.go:400] "Attempting to sync node with API server" Mar 20 21:26:41.369013 kubelet[2388]: I0320 21:26:41.369004 2388 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:26:41.369070 kubelet[2388]: I0320 21:26:41.369023 2388 kubelet.go:312] "Adding apiserver pod source" Mar 20 21:26:41.369070 kubelet[2388]: I0320 21:26:41.369045 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:26:41.370855 kubelet[2388]: W0320 21:26:41.370664 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.370855 kubelet[2388]: E0320 21:26:41.370734 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.371924 kubelet[2388]: W0320 21:26:41.371875 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.371924 kubelet[2388]: E0320 21:26:41.371919 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.373862 kubelet[2388]: I0320 21:26:41.373832 2388 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:26:41.375048 kubelet[2388]: I0320 21:26:41.375013 2388 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:26:41.375123 kubelet[2388]: W0320 21:26:41.375103 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
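The repeated connection-refused errors against https://10.0.0.79:6443 are the usual chicken-and-egg of a self-hosting control plane: this kubelet is trying to watch the very API server it is about to launch from the static pod path it just registered, so the watches only succeed once those pods are running. Static pod manifests need no API server at all; a minimal skeleton of the kind of file that lives in /etc/kubernetes/manifests, for illustration only (image tag chosen to match the kubelet version in this log, everything else a placeholder):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver              # surfaces later as the mirror pod kube-apiserver-localhost
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.1
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.79   # the address the kubelet is dialing above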
Mar 20 21:26:41.375884 kubelet[2388]: I0320 21:26:41.375797 2388 server.go:1264] "Started kubelet" Mar 20 21:26:41.376704 kubelet[2388]: I0320 21:26:41.376553 2388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:26:41.377287 kubelet[2388]: I0320 21:26:41.377140 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:26:41.378157 kubelet[2388]: I0320 21:26:41.377430 2388 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:26:41.378157 kubelet[2388]: I0320 21:26:41.377470 2388 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:26:41.378582 kubelet[2388]: I0320 21:26:41.378548 2388 server.go:455] "Adding debug handlers to kubelet server" Mar 20 21:26:41.380112 kubelet[2388]: E0320 21:26:41.380091 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:26:41.380682 kubelet[2388]: I0320 21:26:41.380137 2388 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 21:26:41.381060 kubelet[2388]: I0320 21:26:41.381031 2388 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:26:41.381839 kubelet[2388]: I0320 21:26:41.381230 2388 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:26:41.383216 kubelet[2388]: W0320 21:26:41.383170 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.383357 kubelet[2388]: E0320 21:26:41.383342 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.383995 kubelet[2388]: E0320 21:26:41.383857 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9ff9defd3f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:26:41.375772513 +0000 UTC m=+0.580684273,LastTimestamp:2025-03-20 21:26:41.375772513 +0000 UTC m=+0.580684273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:26:41.384183 kubelet[2388]: E0320 21:26:41.384145 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Mar 20 21:26:41.384581 kubelet[2388]: I0320 21:26:41.384565 2388 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:26:41.384944 kubelet[2388]: E0320 21:26:41.384919 2388 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:26:41.385308 kubelet[2388]: I0320 21:26:41.385168 2388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:26:41.387072 kubelet[2388]: I0320 21:26:41.386809 2388 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:26:41.394966 kubelet[2388]: I0320 21:26:41.394777 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:26:41.396148 kubelet[2388]: I0320 21:26:41.396043 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:26:41.397359 kubelet[2388]: I0320 21:26:41.396754 2388 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:26:41.397359 kubelet[2388]: I0320 21:26:41.396778 2388 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 21:26:41.397359 kubelet[2388]: E0320 21:26:41.396814 2388 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:26:41.397459 kubelet[2388]: W0320 21:26:41.397421 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.397483 kubelet[2388]: E0320 21:26:41.397473 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:41.398002 kubelet[2388]: I0320 21:26:41.397987 2388 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:26:41.398002 kubelet[2388]: I0320 21:26:41.397999 2388 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:26:41.398090 kubelet[2388]: I0320 21:26:41.398014 2388 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:26:41.482504 kubelet[2388]: I0320 21:26:41.482485 2388 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:41.482779 kubelet[2388]: E0320 21:26:41.482747 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Mar 20 21:26:41.497052 kubelet[2388]: E0320 21:26:41.497018 2388 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:26:41.584849 kubelet[2388]: E0320 21:26:41.584801 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Mar 20 21:26:41.668251 kubelet[2388]: I0320 21:26:41.668161 2388 policy_none.go:49] "None policy: Start" Mar 20 21:26:41.668791 kubelet[2388]: I0320 21:26:41.668765 2388 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:26:41.668893 kubelet[2388]: I0320 21:26:41.668855 2388 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:26:41.679538 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:26:41.684127 kubelet[2388]: I0320 21:26:41.684104 2388 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:41.684488 kubelet[2388]: E0320 21:26:41.684461 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Mar 20 21:26:41.691211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:26:41.694125 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:26:41.697957 kubelet[2388]: E0320 21:26:41.697934 2388 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:26:41.706140 kubelet[2388]: I0320 21:26:41.706078 2388 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:26:41.706379 kubelet[2388]: I0320 21:26:41.706336 2388 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:26:41.706533 kubelet[2388]: I0320 21:26:41.706470 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:26:41.707413 kubelet[2388]: E0320 21:26:41.707386 2388 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:26:41.985720 kubelet[2388]: E0320 21:26:41.985614 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Mar 20 21:26:42.085911 kubelet[2388]: I0320 21:26:42.085892 2388 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:42.086152 kubelet[2388]: E0320 21:26:42.086128 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Mar 20 21:26:42.098330 kubelet[2388]: I0320 21:26:42.098298 2388 topology_manager.go:215] "Topology Admit Handler" podUID="d8b663cc9535f2d4b63e3b0e74d26e3c" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 21:26:42.098936 kubelet[2388]: I0320 21:26:42.098904 2388 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 21:26:42.099580 kubelet[2388]: I0320 21:26:42.099556 2388 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 21:26:42.104904 systemd[1]: Created slice kubepods-burstable-podd8b663cc9535f2d4b63e3b0e74d26e3c.slice - libcontainer container kubepods-burstable-podd8b663cc9535f2d4b63e3b0e74d26e3c.slice. Mar 20 21:26:42.126022 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. Mar 20 21:26:42.129200 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. 
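With the systemd cgroup driver, these slices form the standard kubelet hierarchy: one parent slice, one slice per QoS class, and one slice per pod whose name embeds the pod UID (dashes in a UID are escaped to underscores, as visible later in the kubepods-besteffort-pod66ffd196_... unit). Reconstructed from the unit names in this log, the tree at this point looks roughly like:

    kubepods.slice
    |- kubepods-besteffort.slice
    |- kubepods-burstable.slice
       |- kubepods-burstable-podd8b663cc9535f2d4b63e3b0e74d26e3c.slice   (kube-apiserver-localhost)
       |- kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice   (kube-controller-manager-localhost)
       |- kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice   (kube-scheduler-localhost)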
Mar 20 21:26:42.185916 kubelet[2388]: I0320 21:26:42.185898 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:42.185985 kubelet[2388]: I0320 21:26:42.185934 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:42.185985 kubelet[2388]: I0320 21:26:42.185959 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:42.186044 kubelet[2388]: I0320 21:26:42.186029 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:26:42.186073 kubelet[2388]: I0320 21:26:42.186063 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:42.186095 kubelet[2388]: I0320 21:26:42.186083 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:42.186120 kubelet[2388]: I0320 21:26:42.186100 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:42.186120 kubelet[2388]: I0320 21:26:42.186116 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:42.186167 kubelet[2388]: I0320 21:26:42.186129 2388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " 
pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:42.186232 kubelet[2388]: W0320 21:26:42.186146 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.186274 kubelet[2388]: E0320 21:26:42.186253 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.423889 containerd[1501]: time="2025-03-20T21:26:42.423790557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8b663cc9535f2d4b63e3b0e74d26e3c,Namespace:kube-system,Attempt:0,}" Mar 20 21:26:42.428325 containerd[1501]: time="2025-03-20T21:26:42.428287649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 20 21:26:42.431916 containerd[1501]: time="2025-03-20T21:26:42.431886497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 20 21:26:42.564746 kubelet[2388]: W0320 21:26:42.564693 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.564746 kubelet[2388]: E0320 21:26:42.564742 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.739276 kubelet[2388]: W0320 21:26:42.739139 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.739276 kubelet[2388]: E0320 21:26:42.739210 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.786657 kubelet[2388]: E0320 21:26:42.786622 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" Mar 20 21:26:42.882784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331461051.mount: Deactivated successfully. 
Mar 20 21:26:42.887522 containerd[1501]: time="2025-03-20T21:26:42.887475977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:26:42.888226 kubelet[2388]: I0320 21:26:42.888188 2388 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:42.888556 kubelet[2388]: E0320 21:26:42.888531 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Mar 20 21:26:42.890549 containerd[1501]: time="2025-03-20T21:26:42.890484007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 20 21:26:42.891551 containerd[1501]: time="2025-03-20T21:26:42.891519679Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:26:42.893434 containerd[1501]: time="2025-03-20T21:26:42.893404473Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:26:42.894250 containerd[1501]: time="2025-03-20T21:26:42.894195847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 21:26:42.895275 containerd[1501]: time="2025-03-20T21:26:42.895222422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:26:42.896026 containerd[1501]: time="2025-03-20T21:26:42.895972268Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 21:26:42.897924 containerd[1501]: time="2025-03-20T21:26:42.897883832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:26:42.899246 containerd[1501]: time="2025-03-20T21:26:42.899212242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.135268ms" Mar 20 21:26:42.899972 containerd[1501]: time="2025-03-20T21:26:42.899934226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.631559ms" Mar 20 21:26:42.900634 containerd[1501]: time="2025-03-20T21:26:42.900583504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 474.166742ms" Mar 20 21:26:42.931303 containerd[1501]: time="2025-03-20T21:26:42.930691847Z" level=info msg="connecting to shim d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb" address="unix:///run/containerd/s/2082cab36bfb9dc2b6042403722ea439bca8e8d88f4f8a9260d30aee4df459de" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:26:42.932861 kubelet[2388]: W0320 21:26:42.932616 2388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.932861 kubelet[2388]: E0320 21:26:42.932659 2388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Mar 20 21:26:42.934433 containerd[1501]: time="2025-03-20T21:26:42.934368000Z" level=info msg="connecting to shim 76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3" address="unix:///run/containerd/s/7a9e014b6e169f8cba61923277cddb1b5774e212c4b2e1b8dccf73ffddf521f6" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:26:42.938440 containerd[1501]: time="2025-03-20T21:26:42.938397114Z" level=info msg="connecting to shim ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e" address="unix:///run/containerd/s/79b649a697ccdfafa4e440c1ec93555c540aa280eaed179b61b3f9ba6977c8f2" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:26:42.959402 systemd[1]: Started cri-containerd-76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3.scope - libcontainer container 76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3. Mar 20 21:26:42.963334 systemd[1]: Started cri-containerd-d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb.scope - libcontainer container d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb. Mar 20 21:26:42.964975 systemd[1]: Started cri-containerd-ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e.scope - libcontainer container ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e. 
Mar 20 21:26:42.988605 kubelet[2388]: E0320 21:26:42.988438 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9ff9defd3f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:26:41.375772513 +0000 UTC m=+0.580684273,LastTimestamp:2025-03-20 21:26:41.375772513 +0000 UTC m=+0.580684273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:26:43.011393 containerd[1501]: time="2025-03-20T21:26:43.011279819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3\"" Mar 20 21:26:43.012224 containerd[1501]: time="2025-03-20T21:26:43.012184165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb\"" Mar 20 21:26:43.013874 containerd[1501]: time="2025-03-20T21:26:43.013844859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8b663cc9535f2d4b63e3b0e74d26e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e\"" Mar 20 21:26:43.015744 containerd[1501]: time="2025-03-20T21:26:43.015678036Z" level=info msg="CreateContainer within sandbox \"d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 21:26:43.015836 containerd[1501]: time="2025-03-20T21:26:43.015819090Z" level=info msg="CreateContainer within sandbox \"76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 21:26:43.016499 containerd[1501]: time="2025-03-20T21:26:43.016481973Z" level=info msg="CreateContainer within sandbox \"ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 21:26:43.029549 containerd[1501]: time="2025-03-20T21:26:43.029480245Z" level=info msg="Container 6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:43.031067 containerd[1501]: time="2025-03-20T21:26:43.031043195Z" level=info msg="Container 3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:43.033253 containerd[1501]: time="2025-03-20T21:26:43.033230096Z" level=info msg="Container d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:43.037524 containerd[1501]: time="2025-03-20T21:26:43.037498629Z" level=info msg="CreateContainer within sandbox \"d3cea4cad1ed883972dd6eb6d79072bce9330c1425b970352db6f1ba33db2feb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b\"" Mar 20 21:26:43.038072 containerd[1501]: time="2025-03-20T21:26:43.038042770Z" level=info msg="StartContainer for \"6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b\"" Mar 20 21:26:43.039001 containerd[1501]: time="2025-03-20T21:26:43.038968295Z" level=info msg="connecting to shim 6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b" address="unix:///run/containerd/s/2082cab36bfb9dc2b6042403722ea439bca8e8d88f4f8a9260d30aee4df459de" protocol=ttrpc version=3 Mar 20 21:26:43.040709 containerd[1501]: time="2025-03-20T21:26:43.040681477Z" level=info msg="CreateContainer within sandbox \"ef1370792c7e10e67f19e48ff7f070a3b592335aacf3911be70008c57ead198e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4\"" Mar 20 21:26:43.041020 containerd[1501]: time="2025-03-20T21:26:43.040965710Z" level=info msg="StartContainer for \"3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4\"" Mar 20 21:26:43.041961 containerd[1501]: time="2025-03-20T21:26:43.041933274Z" level=info msg="connecting to shim 3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4" address="unix:///run/containerd/s/79b649a697ccdfafa4e440c1ec93555c540aa280eaed179b61b3f9ba6977c8f2" protocol=ttrpc version=3 Mar 20 21:26:43.044282 containerd[1501]: time="2025-03-20T21:26:43.044243055Z" level=info msg="CreateContainer within sandbox \"76d3b663d941c020ea2e82309a1f6c1ff9b606325c1fda0ae03c78138249c1c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370\"" Mar 20 21:26:43.046083 containerd[1501]: time="2025-03-20T21:26:43.046040245Z" level=info msg="StartContainer for \"d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370\"" Mar 20 21:26:43.051027 containerd[1501]: time="2025-03-20T21:26:43.050344095Z" level=info msg="connecting to shim d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370" address="unix:///run/containerd/s/7a9e014b6e169f8cba61923277cddb1b5774e212c4b2e1b8dccf73ffddf521f6" protocol=ttrpc version=3 Mar 20 21:26:43.059462 systemd[1]: Started cri-containerd-6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b.scope - libcontainer container 6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b. Mar 20 21:26:43.062712 systemd[1]: Started cri-containerd-3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4.scope - libcontainer container 3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4. Mar 20 21:26:43.067890 systemd[1]: Started cri-containerd-d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370.scope - libcontainer container d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370. 
Mar 20 21:26:43.110489 containerd[1501]: time="2025-03-20T21:26:43.110438622Z" level=info msg="StartContainer for \"6e3ef11d6ecd7cc3b9eba271c5fb026fbda260627710c668a8ae3b1793b9ca0b\" returns successfully" Mar 20 21:26:43.116599 containerd[1501]: time="2025-03-20T21:26:43.116552455Z" level=info msg="StartContainer for \"3c4cdc5edfa5889ed78e567f846e7eee59092a134304866094a04b52544891f4\" returns successfully" Mar 20 21:26:43.132548 containerd[1501]: time="2025-03-20T21:26:43.132379090Z" level=info msg="StartContainer for \"d9e6a49f7e03c26c1b887078ffffaa8e8f1e5192391adfc209044e71c6c86370\" returns successfully" Mar 20 21:26:44.245112 kubelet[2388]: E0320 21:26:44.245066 2388 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 20 21:26:44.389961 kubelet[2388]: E0320 21:26:44.389907 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 21:26:44.490394 kubelet[2388]: I0320 21:26:44.490363 2388 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:44.497670 kubelet[2388]: I0320 21:26:44.497590 2388 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 21:26:44.502624 kubelet[2388]: E0320 21:26:44.502596 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:26:44.603276 kubelet[2388]: E0320 21:26:44.603222 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:26:44.703350 kubelet[2388]: E0320 21:26:44.703308 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:26:44.804115 kubelet[2388]: E0320 21:26:44.804014 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:26:45.373977 kubelet[2388]: I0320 21:26:45.373913 2388 apiserver.go:52] "Watching apiserver" Mar 20 21:26:45.381769 kubelet[2388]: I0320 21:26:45.381742 2388 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:26:45.729961 systemd[1]: Reload requested from client PID 2663 ('systemctl') (unit session-7.scope)... Mar 20 21:26:45.729976 systemd[1]: Reloading... Mar 20 21:26:45.820295 zram_generator::config[2707]: No configuration found. Mar 20 21:26:46.109676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:26:46.224704 systemd[1]: Reloading finished in 494 ms. Mar 20 21:26:46.254031 kubelet[2388]: I0320 21:26:46.253967 2388 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:26:46.254158 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:26:46.273727 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:26:46.274042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:46.274097 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 116.5M memory peak. Mar 20 21:26:46.276012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
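By this point the control-plane containers have started and the node has registered successfully, so the reload and restart that follow bring the kubelet (new PID 2752) up against a reachable API server. The "referenced but unset environment variable" notices come from the optional EnvironmentFile lines the kubelet unit sources; on kubeadm-style hosts those variables are typically supplied by files along these lines (a sketch of the convention, not the actual units on this Flatcar host):

    # drop-in in the style of 10-kubeadm.conf
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # leading "-" marks the file as optional
    EnvironmentFile=-/etc/default/kubelet                 # would define KUBELET_EXTRA_ARGS

    # /var/lib/kubelet/kubeadm-flags.env (sketch)
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"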
Mar 20 21:26:46.454525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:26:46.464565 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:26:46.611500 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:26:46.611500 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:26:46.611500 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:26:46.611863 kubelet[2752]: I0320 21:26:46.611547 2752 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:26:46.615972 kubelet[2752]: I0320 21:26:46.615947 2752 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 21:26:46.615972 kubelet[2752]: I0320 21:26:46.615969 2752 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:26:46.616324 kubelet[2752]: I0320 21:26:46.616309 2752 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 21:26:46.617448 kubelet[2752]: I0320 21:26:46.617431 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 21:26:46.618608 kubelet[2752]: I0320 21:26:46.618582 2752 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:26:46.627210 kubelet[2752]: I0320 21:26:46.627179 2752 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:26:46.627466 kubelet[2752]: I0320 21:26:46.627427 2752 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:26:46.627613 kubelet[2752]: I0320 21:26:46.627455 2752 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 21:26:46.627697 kubelet[2752]: I0320 21:26:46.627624 2752 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:26:46.627697 kubelet[2752]: I0320 21:26:46.627634 2752 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 21:26:46.627697 kubelet[2752]: I0320 21:26:46.627673 2752 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:26:46.627770 kubelet[2752]: I0320 21:26:46.627757 2752 kubelet.go:400] "Attempting to sync node with API server" Mar 20 21:26:46.627770 kubelet[2752]: I0320 21:26:46.627768 2752 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:26:46.627814 kubelet[2752]: I0320 21:26:46.627788 2752 kubelet.go:312] "Adding apiserver pod source" Mar 20 21:26:46.627814 kubelet[2752]: I0320 21:26:46.627806 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:26:46.629689 kubelet[2752]: I0320 21:26:46.628250 2752 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:26:46.629689 kubelet[2752]: I0320 21:26:46.628481 2752 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:26:46.629689 kubelet[2752]: I0320 21:26:46.628863 2752 server.go:1264] "Started kubelet" Mar 20 21:26:46.629689 kubelet[2752]: I0320 21:26:46.629454 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:26:46.629689 kubelet[2752]: I0320 21:26:46.629651 2752 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 
21:26:46.629689 kubelet[2752]: I0320 21:26:46.629644 2752 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:26:46.630100 kubelet[2752]: I0320 21:26:46.630074 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:26:46.631584 kubelet[2752]: I0320 21:26:46.631565 2752 server.go:455] "Adding debug handlers to kubelet server" Mar 20 21:26:46.634890 kubelet[2752]: I0320 21:26:46.634858 2752 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 21:26:46.635573 kubelet[2752]: I0320 21:26:46.635360 2752 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:26:46.636042 kubelet[2752]: I0320 21:26:46.635999 2752 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:26:46.638900 kubelet[2752]: I0320 21:26:46.638859 2752 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:26:46.638981 kubelet[2752]: I0320 21:26:46.638939 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:26:46.641620 kubelet[2752]: I0320 21:26:46.641304 2752 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:26:46.643217 kubelet[2752]: E0320 21:26:46.643188 2752 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:26:46.645070 kubelet[2752]: I0320 21:26:46.645021 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:26:46.646530 kubelet[2752]: I0320 21:26:46.646503 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:26:46.646598 kubelet[2752]: I0320 21:26:46.646535 2752 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:26:46.646598 kubelet[2752]: I0320 21:26:46.646550 2752 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 21:26:46.646598 kubelet[2752]: E0320 21:26:46.646586 2752 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.674975 2752 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.674992 2752 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675011 2752 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675178 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675188 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675206 2752 policy_none.go:49] "None policy: Start" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675695 2752 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675711 2752 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:26:46.676314 kubelet[2752]: I0320 21:26:46.675937 2752 state_mem.go:75] "Updated machine memory state" Mar 20 21:26:46.680293 kubelet[2752]: I0320 21:26:46.680271 2752 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:26:46.680472 
kubelet[2752]: I0320 21:26:46.680436 2752 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:26:46.680538 kubelet[2752]: I0320 21:26:46.680525 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:26:46.736556 kubelet[2752]: I0320 21:26:46.736444 2752 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:26:46.747721 kubelet[2752]: I0320 21:26:46.747678 2752 topology_manager.go:215] "Topology Admit Handler" podUID="d8b663cc9535f2d4b63e3b0e74d26e3c" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 21:26:46.747778 kubelet[2752]: I0320 21:26:46.747761 2752 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 21:26:46.747834 kubelet[2752]: I0320 21:26:46.747812 2752 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 21:26:46.839526 kubelet[2752]: I0320 21:26:46.839461 2752 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 20 21:26:46.839665 kubelet[2752]: I0320 21:26:46.839538 2752 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 21:26:46.936437 kubelet[2752]: I0320 21:26:46.936394 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:46.936437 kubelet[2752]: I0320 21:26:46.936439 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:26:46.936618 kubelet[2752]: I0320 21:26:46.936471 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:46.936618 kubelet[2752]: I0320 21:26:46.936490 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:46.936618 kubelet[2752]: I0320 21:26:46.936506 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:46.936618 kubelet[2752]: I0320 21:26:46.936524 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:46.936618 kubelet[2752]: I0320 21:26:46.936538 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:46.936784 kubelet[2752]: I0320 21:26:46.936555 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:46.936784 kubelet[2752]: I0320 21:26:46.936571 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8b663cc9535f2d4b63e3b0e74d26e3c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8b663cc9535f2d4b63e3b0e74d26e3c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:47.150844 sudo[2784]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 21:26:47.151287 sudo[2784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 21:26:47.602499 sudo[2784]: pam_unix(sudo:session): session closed for user root Mar 20 21:26:47.628969 kubelet[2752]: I0320 21:26:47.628931 2752 apiserver.go:52] "Watching apiserver" Mar 20 21:26:47.636987 kubelet[2752]: I0320 21:26:47.636947 2752 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:26:47.695175 kubelet[2752]: I0320 21:26:47.695106 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6950904530000002 podStartE2EDuration="1.695090453s" podCreationTimestamp="2025-03-20 21:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:26:47.695038235 +0000 UTC m=+1.226760647" watchObservedRunningTime="2025-03-20 21:26:47.695090453 +0000 UTC m=+1.226812865" Mar 20 21:26:47.695175 kubelet[2752]: E0320 21:26:47.695166 2752 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 21:26:47.695175 kubelet[2752]: E0320 21:26:47.695179 2752 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:26:47.779597 kubelet[2752]: I0320 21:26:47.779514 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7794922560000002 podStartE2EDuration="1.779492256s" podCreationTimestamp="2025-03-20 21:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:26:47.709139284 +0000 UTC m=+1.240861696" watchObservedRunningTime="2025-03-20 21:26:47.779492256 
+0000 UTC m=+1.311214668" Mar 20 21:26:47.810442 kubelet[2752]: I0320 21:26:47.810397 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.810381664 podStartE2EDuration="1.810381664s" podCreationTimestamp="2025-03-20 21:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:26:47.779631507 +0000 UTC m=+1.311353919" watchObservedRunningTime="2025-03-20 21:26:47.810381664 +0000 UTC m=+1.342104076" Mar 20 21:26:48.843128 sudo[1704]: pam_unix(sudo:session): session closed for user root Mar 20 21:26:48.844719 sshd[1703]: Connection closed by 10.0.0.1 port 36464 Mar 20 21:26:48.845112 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:48.848975 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:36464.service: Deactivated successfully. Mar 20 21:26:48.851314 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:26:48.851542 systemd[1]: session-7.scope: Consumed 3.996s CPU time, 270.8M memory peak. Mar 20 21:26:48.852801 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:26:48.853641 systemd-logind[1483]: Removed session 7. Mar 20 21:27:01.499025 update_engine[1485]: I20250320 21:27:01.498952 1485 update_attempter.cc:509] Updating boot flags... Mar 20 21:27:01.529419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2834) Mar 20 21:27:01.568293 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2838) Mar 20 21:27:01.614556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2838) Mar 20 21:27:01.844335 kubelet[2752]: I0320 21:27:01.842971 2752 topology_manager.go:215] "Topology Admit Handler" podUID="66ffd196-83bf-4d6a-ba89-eccf76f4ec12" podNamespace="kube-system" podName="kube-proxy-4zd7b" Mar 20 21:27:01.847511 kubelet[2752]: I0320 21:27:01.847467 2752 topology_manager.go:215] "Topology Admit Handler" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" podNamespace="kube-system" podName="cilium-8gcdq" Mar 20 21:27:01.856140 systemd[1]: Created slice kubepods-besteffort-pod66ffd196_83bf_4d6a_ba89_eccf76f4ec12.slice - libcontainer container kubepods-besteffort-pod66ffd196_83bf_4d6a_ba89_eccf76f4ec12.slice. Mar 20 21:27:01.871089 systemd[1]: Created slice kubepods-burstable-pod552218fd_bedf_4096_aa60_95b93cda75a6.slice - libcontainer container kubepods-burstable-pod552218fd_bedf_4096_aa60_95b93cda75a6.slice. Mar 20 21:27:01.885884 kubelet[2752]: I0320 21:27:01.885860 2752 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:27:01.886332 containerd[1501]: time="2025-03-20T21:27:01.886293113Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
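The runtime-config update means the controller manager has assigned this node its pod CIDR; containerd then simply waits for Cilium (unpacked via sudo a few entries earlier) to drop a CNI config into the etc-cni-netd directory mounted below. On the API side the assignment lives on the Node object, roughly as follows (only the node name and CIDR are taken from this log):

    apiVersion: v1
    kind: Node
    metadata:
      name: localhost
    spec:
      podCIDR: 192.168.0.0/24
      podCIDRs:
      - 192.168.0.0/24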
Mar 20 21:27:01.886649 kubelet[2752]: I0320 21:27:01.886473 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:27:02.033347 kubelet[2752]: I0320 21:27:02.033305 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-bpf-maps\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033433 kubelet[2752]: I0320 21:27:02.033362 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cni-path\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033433 kubelet[2752]: I0320 21:27:02.033384 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-net\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033433 kubelet[2752]: I0320 21:27:02.033399 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552218fd-bedf-4096-aa60-95b93cda75a6-clustermesh-secrets\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033433 kubelet[2752]: I0320 21:27:02.033415 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-kernel\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033433 kubelet[2752]: I0320 21:27:02.033433 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-xtables-lock\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033561 kubelet[2752]: I0320 21:27:02.033451 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-xtables-lock\") pod \"kube-proxy-4zd7b\" (UID: \"66ffd196-83bf-4d6a-ba89-eccf76f4ec12\") " pod="kube-system/kube-proxy-4zd7b" Mar 20 21:27:02.033561 kubelet[2752]: I0320 21:27:02.033508 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-lib-modules\") pod \"kube-proxy-4zd7b\" (UID: \"66ffd196-83bf-4d6a-ba89-eccf76f4ec12\") " pod="kube-system/kube-proxy-4zd7b" Mar 20 21:27:02.033561 kubelet[2752]: I0320 21:27:02.033530 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdpp\" (UniqueName: \"kubernetes.io/projected/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-api-access-2qdpp\") pod \"kube-proxy-4zd7b\" (UID: \"66ffd196-83bf-4d6a-ba89-eccf76f4ec12\") " pod="kube-system/kube-proxy-4zd7b" 
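This run of VerifyControllerAttachedVolume entries (continuing below) is the kubelet wiring up the hostPath, configmap, and secret volumes declared by the Cilium and kube-proxy DaemonSets. For orientation only, an excerpt of how a Cilium pod spec typically declares a few of them upstream; the host paths are common defaults and may differ on this Flatcar node:

    volumes:
    - name: bpf-maps
      hostPath: {path: /sys/fs/bpf, type: DirectoryOrCreate}
    - name: cilium-run
      hostPath: {path: /var/run/cilium, type: DirectoryOrCreate}
    - name: xtables-lock
      hostPath: {path: /run/xtables.lock, type: FileOrCreate}
    - name: lib-modules
      hostPath: {path: /lib/modules}
    - name: cilium-config-path
      configMap: {name: cilium-config}
    - name: clustermesh-secrets
      secret: {secretName: cilium-clustermesh, optional: true}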
Mar 20 21:27:02.033561 kubelet[2752]: I0320 21:27:02.033549 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-lib-modules\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033653 kubelet[2752]: I0320 21:27:02.033563 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-hubble-tls\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033653 kubelet[2752]: I0320 21:27:02.033601 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-etc-cni-netd\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033653 kubelet[2752]: I0320 21:27:02.033639 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-config-path\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033725 kubelet[2752]: I0320 21:27:02.033682 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dmt\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033725 kubelet[2752]: I0320 21:27:02.033713 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-run\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033771 kubelet[2752]: I0320 21:27:02.033739 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-hostproc\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.033771 kubelet[2752]: I0320 21:27:02.033759 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-proxy\") pod \"kube-proxy-4zd7b\" (UID: \"66ffd196-83bf-4d6a-ba89-eccf76f4ec12\") " pod="kube-system/kube-proxy-4zd7b" Mar 20 21:27:02.033820 kubelet[2752]: I0320 21:27:02.033774 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-cgroup\") pod \"cilium-8gcdq\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " pod="kube-system/cilium-8gcdq" Mar 20 21:27:02.140645 kubelet[2752]: E0320 21:27:02.139439 2752 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.140645 
kubelet[2752]: E0320 21:27:02.139948 2752 projected.go:200] Error preparing data for projected volume kube-api-access-k6dmt for pod kube-system/cilium-8gcdq: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.140645 kubelet[2752]: E0320 21:27:02.140005 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt podName:552218fd-bedf-4096-aa60-95b93cda75a6 nodeName:}" failed. No retries permitted until 2025-03-20 21:27:02.639988023 +0000 UTC m=+16.171710435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k6dmt" (UniqueName: "kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt") pod "cilium-8gcdq" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6") : configmap "kube-root-ca.crt" not found Mar 20 21:27:02.140645 kubelet[2752]: E0320 21:27:02.140430 2752 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.140645 kubelet[2752]: E0320 21:27:02.140453 2752 projected.go:200] Error preparing data for projected volume kube-api-access-2qdpp for pod kube-system/kube-proxy-4zd7b: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.140645 kubelet[2752]: E0320 21:27:02.140490 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-api-access-2qdpp podName:66ffd196-83bf-4d6a-ba89-eccf76f4ec12 nodeName:}" failed. No retries permitted until 2025-03-20 21:27:02.640474865 +0000 UTC m=+16.172197267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2qdpp" (UniqueName: "kubernetes.io/projected/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-api-access-2qdpp") pod "kube-proxy-4zd7b" (UID: "66ffd196-83bf-4d6a-ba89-eccf76f4ec12") : configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739740 kubelet[2752]: E0320 21:27:02.739705 2752 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739740 kubelet[2752]: E0320 21:27:02.739729 2752 projected.go:200] Error preparing data for projected volume kube-api-access-k6dmt for pod kube-system/cilium-8gcdq: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739907 kubelet[2752]: E0320 21:27:02.739762 2752 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739907 kubelet[2752]: E0320 21:27:02.739781 2752 projected.go:200] Error preparing data for projected volume kube-api-access-2qdpp for pod kube-system/kube-proxy-4zd7b: configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739907 kubelet[2752]: E0320 21:27:02.739770 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt podName:552218fd-bedf-4096-aa60-95b93cda75a6 nodeName:}" failed. No retries permitted until 2025-03-20 21:27:03.739755525 +0000 UTC m=+17.271477937 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k6dmt" (UniqueName: "kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt") pod "cilium-8gcdq" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6") : configmap "kube-root-ca.crt" not found Mar 20 21:27:02.739907 kubelet[2752]: E0320 21:27:02.739830 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-api-access-2qdpp podName:66ffd196-83bf-4d6a-ba89-eccf76f4ec12 nodeName:}" failed. No retries permitted until 2025-03-20 21:27:03.739815018 +0000 UTC m=+17.271537430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2qdpp" (UniqueName: "kubernetes.io/projected/66ffd196-83bf-4d6a-ba89-eccf76f4ec12-kube-api-access-2qdpp") pod "kube-proxy-4zd7b" (UID: "66ffd196-83bf-4d6a-ba89-eccf76f4ec12") : configmap "kube-root-ca.crt" not found Mar 20 21:27:03.037788 kubelet[2752]: I0320 21:27:03.037124 2752 topology_manager.go:215] "Topology Admit Handler" podUID="fecb29e0-de4f-4ce6-8305-c177206400d6" podNamespace="kube-system" podName="cilium-operator-599987898-94k29" Mar 20 21:27:03.043438 kubelet[2752]: I0320 21:27:03.043351 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqssb\" (UniqueName: \"kubernetes.io/projected/fecb29e0-de4f-4ce6-8305-c177206400d6-kube-api-access-rqssb\") pod \"cilium-operator-599987898-94k29\" (UID: \"fecb29e0-de4f-4ce6-8305-c177206400d6\") " pod="kube-system/cilium-operator-599987898-94k29" Mar 20 21:27:03.043438 kubelet[2752]: I0320 21:27:03.043399 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fecb29e0-de4f-4ce6-8305-c177206400d6-cilium-config-path\") pod \"cilium-operator-599987898-94k29\" (UID: \"fecb29e0-de4f-4ce6-8305-c177206400d6\") " pod="kube-system/cilium-operator-599987898-94k29" Mar 20 21:27:03.051502 systemd[1]: Created slice kubepods-besteffort-podfecb29e0_de4f_4ce6_8305_c177206400d6.slice - libcontainer container kubepods-besteffort-podfecb29e0_de4f_4ce6_8305_c177206400d6.slice. Mar 20 21:27:03.355123 containerd[1501]: time="2025-03-20T21:27:03.354993657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-94k29,Uid:fecb29e0-de4f-4ce6-8305-c177206400d6,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:03.559159 containerd[1501]: time="2025-03-20T21:27:03.559089139Z" level=info msg="connecting to shim a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1" address="unix:///run/containerd/s/9f1ed16b252f08ddba8ff6bccfba155a46f480fd8d35a91512bebb26f66ed6a4" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:03.607416 systemd[1]: Started cri-containerd-a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1.scope - libcontainer container a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1. 
Mar 20 21:27:03.683393 containerd[1501]: time="2025-03-20T21:27:03.683348381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-94k29,Uid:fecb29e0-de4f-4ce6-8305-c177206400d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\"" Mar 20 21:27:03.687476 containerd[1501]: time="2025-03-20T21:27:03.686842460Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 21:27:03.970121 containerd[1501]: time="2025-03-20T21:27:03.970079645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zd7b,Uid:66ffd196-83bf-4d6a-ba89-eccf76f4ec12,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:03.974852 containerd[1501]: time="2025-03-20T21:27:03.974803270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gcdq,Uid:552218fd-bedf-4096-aa60-95b93cda75a6,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:04.344040 containerd[1501]: time="2025-03-20T21:27:04.343879917Z" level=info msg="connecting to shim b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d" address="unix:///run/containerd/s/f9e15d1e491b8457fec71fae53022ada3a164696af9bfce61ed2dec896555f60" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:04.347080 containerd[1501]: time="2025-03-20T21:27:04.347031776Z" level=info msg="connecting to shim 24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:04.379433 systemd[1]: Started cri-containerd-b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d.scope - libcontainer container b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d. Mar 20 21:27:04.382723 systemd[1]: Started cri-containerd-24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e.scope - libcontainer container 24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e. 
Mar 20 21:27:04.409884 containerd[1501]: time="2025-03-20T21:27:04.409833159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zd7b,Uid:66ffd196-83bf-4d6a-ba89-eccf76f4ec12,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d\"" Mar 20 21:27:04.412755 containerd[1501]: time="2025-03-20T21:27:04.412725216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gcdq,Uid:552218fd-bedf-4096-aa60-95b93cda75a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\"" Mar 20 21:27:04.414084 containerd[1501]: time="2025-03-20T21:27:04.414059418Z" level=info msg="CreateContainer within sandbox \"b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:27:04.426221 containerd[1501]: time="2025-03-20T21:27:04.426198176Z" level=info msg="Container c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:04.435305 containerd[1501]: time="2025-03-20T21:27:04.435247834Z" level=info msg="CreateContainer within sandbox \"b7346f79c6637624848216baed31339389a8d5e127bfc091438024cfde7d760d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c\"" Mar 20 21:27:04.435780 containerd[1501]: time="2025-03-20T21:27:04.435752949Z" level=info msg="StartContainer for \"c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c\"" Mar 20 21:27:04.437058 containerd[1501]: time="2025-03-20T21:27:04.437031025Z" level=info msg="connecting to shim c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c" address="unix:///run/containerd/s/f9e15d1e491b8457fec71fae53022ada3a164696af9bfce61ed2dec896555f60" protocol=ttrpc version=3 Mar 20 21:27:04.462382 systemd[1]: Started cri-containerd-c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c.scope - libcontainer container c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c. Mar 20 21:27:04.505077 containerd[1501]: time="2025-03-20T21:27:04.505032227Z" level=info msg="StartContainer for \"c7bbb6d5b6be7c69907044c8289bb0d6749760952db1611630f14377dcbbde1c\" returns successfully" Mar 20 21:27:06.660512 kubelet[2752]: I0320 21:27:06.660449 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4zd7b" podStartSLOduration=5.660432981 podStartE2EDuration="5.660432981s" podCreationTimestamp="2025-03-20 21:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:27:04.715537265 +0000 UTC m=+18.247259667" watchObservedRunningTime="2025-03-20 21:27:06.660432981 +0000 UTC m=+20.192155393" Mar 20 21:27:07.054487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796495022.mount: Deactivated successfully. 
Mar 20 21:27:07.514636 containerd[1501]: time="2025-03-20T21:27:07.514583383Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:27:07.515404 containerd[1501]: time="2025-03-20T21:27:07.515327448Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 20 21:27:07.516522 containerd[1501]: time="2025-03-20T21:27:07.516487969Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:27:07.517598 containerd[1501]: time="2025-03-20T21:27:07.517568068Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.830674209s" Mar 20 21:27:07.517633 containerd[1501]: time="2025-03-20T21:27:07.517596782Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 20 21:27:07.518597 containerd[1501]: time="2025-03-20T21:27:07.518561462Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 21:27:07.519673 containerd[1501]: time="2025-03-20T21:27:07.519653033Z" level=info msg="CreateContainer within sandbox \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 20 21:27:07.528501 containerd[1501]: time="2025-03-20T21:27:07.528467111Z" level=info msg="Container b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:07.535251 containerd[1501]: time="2025-03-20T21:27:07.535223496Z" level=info msg="CreateContainer within sandbox \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\"" Mar 20 21:27:07.535641 containerd[1501]: time="2025-03-20T21:27:07.535613242Z" level=info msg="StartContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\"" Mar 20 21:27:07.536376 containerd[1501]: time="2025-03-20T21:27:07.536337168Z" level=info msg="connecting to shim b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41" address="unix:///run/containerd/s/9f1ed16b252f08ddba8ff6bccfba155a46f480fd8d35a91512bebb26f66ed6a4" protocol=ttrpc version=3 Mar 20 21:27:07.567450 systemd[1]: Started cri-containerd-b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41.scope - libcontainer container b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41. 
Mar 20 21:27:07.595711 containerd[1501]: time="2025-03-20T21:27:07.595643061Z" level=info msg="StartContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" returns successfully" Mar 20 21:27:08.682043 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:37846.service - OpenSSH per-connection server daemon (10.0.0.1:37846). Mar 20 21:27:08.734687 sshd[3186]: Accepted publickey for core from 10.0.0.1 port 37846 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:08.736180 sshd-session[3186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:08.740953 systemd-logind[1483]: New session 8 of user core. Mar 20 21:27:08.747378 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:27:08.877345 sshd[3188]: Connection closed by 10.0.0.1 port 37846 Mar 20 21:27:08.877665 sshd-session[3186]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:08.881501 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:37846.service: Deactivated successfully. Mar 20 21:27:08.883651 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:27:08.884526 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:27:08.885605 systemd-logind[1483]: Removed session 8. Mar 20 21:27:13.893694 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:37856.service - OpenSSH per-connection server daemon (10.0.0.1:37856). Mar 20 21:27:13.944988 sshd[3202]: Accepted publickey for core from 10.0.0.1 port 37856 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:13.946493 sshd-session[3202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:13.951558 systemd-logind[1483]: New session 9 of user core. Mar 20 21:27:13.960452 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:27:14.086345 sshd[3204]: Connection closed by 10.0.0.1 port 37856 Mar 20 21:27:14.086708 sshd-session[3202]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:14.091697 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:37856.service: Deactivated successfully. Mar 20 21:27:14.093823 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:27:14.094531 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:27:14.095388 systemd-logind[1483]: Removed session 9. Mar 20 21:27:17.561187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89872889.mount: Deactivated successfully. Mar 20 21:27:19.107214 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:39822.service - OpenSSH per-connection server daemon (10.0.0.1:39822). Mar 20 21:27:19.309137 sshd[3234]: Accepted publickey for core from 10.0.0.1 port 39822 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:19.311003 sshd-session[3234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:19.316187 systemd-logind[1483]: New session 10 of user core. Mar 20 21:27:19.324434 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:27:19.472280 sshd[3244]: Connection closed by 10.0.0.1 port 39822 Mar 20 21:27:19.472454 sshd-session[3234]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:19.477333 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:39822.service: Deactivated successfully. Mar 20 21:27:19.479602 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 21:27:19.481926 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. 
Mar 20 21:27:19.483835 systemd-logind[1483]: Removed session 10. Mar 20 21:27:21.780766 containerd[1501]: time="2025-03-20T21:27:21.780694094Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:27:21.781581 containerd[1501]: time="2025-03-20T21:27:21.781484189Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 20 21:27:21.783958 containerd[1501]: time="2025-03-20T21:27:21.783919648Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:27:21.785642 containerd[1501]: time="2025-03-20T21:27:21.785596280Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.266995753s" Mar 20 21:27:21.785642 containerd[1501]: time="2025-03-20T21:27:21.785636036Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 20 21:27:21.788082 containerd[1501]: time="2025-03-20T21:27:21.788045586Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:27:21.964914 containerd[1501]: time="2025-03-20T21:27:21.964861867Z" level=info msg="Container 0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:22.064383 containerd[1501]: time="2025-03-20T21:27:22.064218152Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\"" Mar 20 21:27:22.064656 containerd[1501]: time="2025-03-20T21:27:22.064626029Z" level=info msg="StartContainer for \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\"" Mar 20 21:27:22.065457 containerd[1501]: time="2025-03-20T21:27:22.065350632Z" level=info msg="connecting to shim 0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" protocol=ttrpc version=3 Mar 20 21:27:22.093394 systemd[1]: Started cri-containerd-0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2.scope - libcontainer container 0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2. Mar 20 21:27:22.150233 systemd[1]: cri-containerd-0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2.scope: Deactivated successfully. 
Mar 20 21:27:22.152195 containerd[1501]: time="2025-03-20T21:27:22.152143067Z" level=info msg="StartContainer for \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" returns successfully" Mar 20 21:27:22.152315 containerd[1501]: time="2025-03-20T21:27:22.152213950Z" level=info msg="received exit event container_id:\"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" id:\"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" pid:3273 exited_at:{seconds:1742506042 nanos:151793910}" Mar 20 21:27:22.152361 containerd[1501]: time="2025-03-20T21:27:22.152309079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" id:\"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" pid:3273 exited_at:{seconds:1742506042 nanos:151793910}" Mar 20 21:27:22.173911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2-rootfs.mount: Deactivated successfully. Mar 20 21:27:23.303803 kubelet[2752]: I0320 21:27:23.303570 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-94k29" podStartSLOduration=17.46969829 podStartE2EDuration="21.303553024s" podCreationTimestamp="2025-03-20 21:27:02 +0000 UTC" firstStartedPulling="2025-03-20 21:27:03.684503115 +0000 UTC m=+17.216225527" lastFinishedPulling="2025-03-20 21:27:07.518357849 +0000 UTC m=+21.050080261" observedRunningTime="2025-03-20 21:27:07.716742464 +0000 UTC m=+21.248464876" watchObservedRunningTime="2025-03-20 21:27:23.303553024 +0000 UTC m=+36.835275436" Mar 20 21:27:24.484373 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:39824.service - OpenSSH per-connection server daemon (10.0.0.1:39824). Mar 20 21:27:24.541283 sshd[3308]: Accepted publickey for core from 10.0.0.1 port 39824 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:24.543423 sshd-session[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:24.547931 systemd-logind[1483]: New session 11 of user core. Mar 20 21:27:24.558458 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 21:27:24.673128 sshd[3310]: Connection closed by 10.0.0.1 port 39824 Mar 20 21:27:24.673519 sshd-session[3308]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:24.678449 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:39824.service: Deactivated successfully. Mar 20 21:27:24.680611 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:27:24.681283 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Mar 20 21:27:24.682164 systemd-logind[1483]: Removed session 11. 
Mar 20 21:27:24.723364 containerd[1501]: time="2025-03-20T21:27:24.723323942Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:27:25.027002 containerd[1501]: time="2025-03-20T21:27:25.026907474Z" level=info msg="Container cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:25.291434 containerd[1501]: time="2025-03-20T21:27:25.291294151Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\"" Mar 20 21:27:25.292044 containerd[1501]: time="2025-03-20T21:27:25.291957828Z" level=info msg="StartContainer for \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\"" Mar 20 21:27:25.293025 containerd[1501]: time="2025-03-20T21:27:25.292990509Z" level=info msg="connecting to shim cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" protocol=ttrpc version=3 Mar 20 21:27:25.314395 systemd[1]: Started cri-containerd-cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1.scope - libcontainer container cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1. Mar 20 21:27:25.406613 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:27:25.407198 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:27:25.407552 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:27:25.409686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:27:25.410219 containerd[1501]: time="2025-03-20T21:27:25.410187720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" id:\"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" pid:3335 exited_at:{seconds:1742506045 nanos:409764224}" Mar 20 21:27:25.411941 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:27:25.412669 systemd[1]: cri-containerd-cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1.scope: Deactivated successfully. Mar 20 21:27:25.474447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 20 21:27:25.483760 containerd[1501]: time="2025-03-20T21:27:25.483722721Z" level=info msg="received exit event container_id:\"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" id:\"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" pid:3335 exited_at:{seconds:1742506045 nanos:409764224}" Mar 20 21:27:25.485064 containerd[1501]: time="2025-03-20T21:27:25.485019738Z" level=info msg="StartContainer for \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" returns successfully" Mar 20 21:27:25.726954 containerd[1501]: time="2025-03-20T21:27:25.726896810Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:27:25.777725 containerd[1501]: time="2025-03-20T21:27:25.777671195Z" level=info msg="Container 77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:25.949225 containerd[1501]: time="2025-03-20T21:27:25.949172733Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\"" Mar 20 21:27:25.949706 containerd[1501]: time="2025-03-20T21:27:25.949664688Z" level=info msg="StartContainer for \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\"" Mar 20 21:27:25.951176 containerd[1501]: time="2025-03-20T21:27:25.951094545Z" level=info msg="connecting to shim 77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" protocol=ttrpc version=3 Mar 20 21:27:25.979450 systemd[1]: Started cri-containerd-77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761.scope - libcontainer container 77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761. Mar 20 21:27:26.020346 systemd[1]: cri-containerd-77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761.scope: Deactivated successfully. Mar 20 21:27:26.021146 containerd[1501]: time="2025-03-20T21:27:26.021104998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" id:\"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" pid:3381 exited_at:{seconds:1742506046 nanos:20906545}" Mar 20 21:27:26.029203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1-rootfs.mount: Deactivated successfully. Mar 20 21:27:26.132677 containerd[1501]: time="2025-03-20T21:27:26.132620125Z" level=info msg="received exit event container_id:\"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" id:\"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" pid:3381 exited_at:{seconds:1742506046 nanos:20906545}" Mar 20 21:27:26.153151 containerd[1501]: time="2025-03-20T21:27:26.153101193Z" level=info msg="StartContainer for \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" returns successfully" Mar 20 21:27:26.164599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761-rootfs.mount: Deactivated successfully. 
Mar 20 21:27:26.731127 containerd[1501]: time="2025-03-20T21:27:26.731073288Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:27:26.747459 containerd[1501]: time="2025-03-20T21:27:26.746767984Z" level=info msg="Container e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:26.757108 containerd[1501]: time="2025-03-20T21:27:26.757061534Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\"" Mar 20 21:27:26.757406 containerd[1501]: time="2025-03-20T21:27:26.757381905Z" level=info msg="StartContainer for \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\"" Mar 20 21:27:26.758355 containerd[1501]: time="2025-03-20T21:27:26.758321380Z" level=info msg="connecting to shim e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" protocol=ttrpc version=3 Mar 20 21:27:26.781413 systemd[1]: Started cri-containerd-e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01.scope - libcontainer container e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01. Mar 20 21:27:26.805715 systemd[1]: cri-containerd-e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01.scope: Deactivated successfully. Mar 20 21:27:26.806413 containerd[1501]: time="2025-03-20T21:27:26.806334865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" id:\"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" pid:3421 exited_at:{seconds:1742506046 nanos:805939301}" Mar 20 21:27:26.808508 containerd[1501]: time="2025-03-20T21:27:26.808479113Z" level=info msg="received exit event container_id:\"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" id:\"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" pid:3421 exited_at:{seconds:1742506046 nanos:805939301}" Mar 20 21:27:26.816232 containerd[1501]: time="2025-03-20T21:27:26.816192405Z" level=info msg="StartContainer for \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" returns successfully" Mar 20 21:27:27.027932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01-rootfs.mount: Deactivated successfully. 
Mar 20 21:27:27.735414 containerd[1501]: time="2025-03-20T21:27:27.735369973Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 21:27:27.749052 containerd[1501]: time="2025-03-20T21:27:27.748135412Z" level=info msg="Container c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:27.758348 containerd[1501]: time="2025-03-20T21:27:27.758301630Z" level=info msg="CreateContainer within sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\"" Mar 20 21:27:27.758788 containerd[1501]: time="2025-03-20T21:27:27.758766243Z" level=info msg="StartContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\"" Mar 20 21:27:27.759696 containerd[1501]: time="2025-03-20T21:27:27.759675120Z" level=info msg="connecting to shim c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8" address="unix:///run/containerd/s/d2a480ccb3df41a1b215485794f0352c934848baef05be692f8b5fb4fa3b6cb6" protocol=ttrpc version=3 Mar 20 21:27:27.781393 systemd[1]: Started cri-containerd-c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8.scope - libcontainer container c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8. Mar 20 21:27:27.894945 containerd[1501]: time="2025-03-20T21:27:27.894910229Z" level=info msg="StartContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" returns successfully" Mar 20 21:27:27.959439 containerd[1501]: time="2025-03-20T21:27:27.959378683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" id:\"692862547b9714b3d968d94bcf00836e13765cbf03b1b6af80219fd6fbd5741c\" pid:3500 exited_at:{seconds:1742506047 nanos:959055957}" Mar 20 21:27:28.031558 kubelet[2752]: I0320 21:27:28.031356 2752 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 20 21:27:28.071238 kubelet[2752]: I0320 21:27:28.071178 2752 topology_manager.go:215] "Topology Admit Handler" podUID="58229e44-87de-48f6-b300-a72606b0406e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c24w5" Mar 20 21:27:28.071480 kubelet[2752]: I0320 21:27:28.071429 2752 topology_manager.go:215] "Topology Admit Handler" podUID="93040ed3-839e-4912-89a8-2ceae9c2bb09" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2k8hb" Mar 20 21:27:28.080354 systemd[1]: Created slice kubepods-burstable-pod58229e44_87de_48f6_b300_a72606b0406e.slice - libcontainer container kubepods-burstable-pod58229e44_87de_48f6_b300_a72606b0406e.slice. Mar 20 21:27:28.087451 systemd[1]: Created slice kubepods-burstable-pod93040ed3_839e_4912_89a8_2ceae9c2bb09.slice - libcontainer container kubepods-burstable-pod93040ed3_839e_4912_89a8_2ceae9c2bb09.slice. 
Mar 20 21:27:28.268019 kubelet[2752]: I0320 21:27:28.267982 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbpm\" (UniqueName: \"kubernetes.io/projected/93040ed3-839e-4912-89a8-2ceae9c2bb09-kube-api-access-rzbpm\") pod \"coredns-7db6d8ff4d-2k8hb\" (UID: \"93040ed3-839e-4912-89a8-2ceae9c2bb09\") " pod="kube-system/coredns-7db6d8ff4d-2k8hb" Mar 20 21:27:28.268019 kubelet[2752]: I0320 21:27:28.268020 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frfxx\" (UniqueName: \"kubernetes.io/projected/58229e44-87de-48f6-b300-a72606b0406e-kube-api-access-frfxx\") pod \"coredns-7db6d8ff4d-c24w5\" (UID: \"58229e44-87de-48f6-b300-a72606b0406e\") " pod="kube-system/coredns-7db6d8ff4d-c24w5" Mar 20 21:27:28.268019 kubelet[2752]: I0320 21:27:28.268035 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58229e44-87de-48f6-b300-a72606b0406e-config-volume\") pod \"coredns-7db6d8ff4d-c24w5\" (UID: \"58229e44-87de-48f6-b300-a72606b0406e\") " pod="kube-system/coredns-7db6d8ff4d-c24w5" Mar 20 21:27:28.268195 kubelet[2752]: I0320 21:27:28.268053 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93040ed3-839e-4912-89a8-2ceae9c2bb09-config-volume\") pod \"coredns-7db6d8ff4d-2k8hb\" (UID: \"93040ed3-839e-4912-89a8-2ceae9c2bb09\") " pod="kube-system/coredns-7db6d8ff4d-2k8hb" Mar 20 21:27:28.385748 containerd[1501]: time="2025-03-20T21:27:28.385629822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c24w5,Uid:58229e44-87de-48f6-b300-a72606b0406e,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:28.390758 containerd[1501]: time="2025-03-20T21:27:28.390708421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2k8hb,Uid:93040ed3-839e-4912-89a8-2ceae9c2bb09,Namespace:kube-system,Attempt:0,}" Mar 20 21:27:28.756637 kubelet[2752]: I0320 21:27:28.755167 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8gcdq" podStartSLOduration=10.382400189 podStartE2EDuration="27.755148121s" podCreationTimestamp="2025-03-20 21:27:01 +0000 UTC" firstStartedPulling="2025-03-20 21:27:04.413799607 +0000 UTC m=+17.945522019" lastFinishedPulling="2025-03-20 21:27:21.786547539 +0000 UTC m=+35.318269951" observedRunningTime="2025-03-20 21:27:28.751444735 +0000 UTC m=+42.283167167" watchObservedRunningTime="2025-03-20 21:27:28.755148121 +0000 UTC m=+42.286870533" Mar 20 21:27:29.686840 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:47332.service - OpenSSH per-connection server daemon (10.0.0.1:47332). Mar 20 21:27:29.747516 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:29.749279 sshd-session[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:29.753841 systemd-logind[1483]: New session 12 of user core. Mar 20 21:27:29.761501 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:27:29.887553 sshd[3590]: Connection closed by 10.0.0.1 port 47332 Mar 20 21:27:29.887888 sshd-session[3588]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:29.891590 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:47332.service: Deactivated successfully. 
Mar 20 21:27:29.893721 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:27:29.894543 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Mar 20 21:27:29.895756 systemd-logind[1483]: Removed session 12. Mar 20 21:27:30.017687 systemd-networkd[1432]: cilium_host: Link UP Mar 20 21:27:30.017901 systemd-networkd[1432]: cilium_net: Link UP Mar 20 21:27:30.020379 systemd-networkd[1432]: cilium_net: Gained carrier Mar 20 21:27:30.022394 systemd-networkd[1432]: cilium_host: Gained carrier Mar 20 21:27:30.022587 systemd-networkd[1432]: cilium_net: Gained IPv6LL Mar 20 21:27:30.022783 systemd-networkd[1432]: cilium_host: Gained IPv6LL Mar 20 21:27:30.123106 systemd-networkd[1432]: cilium_vxlan: Link UP Mar 20 21:27:30.123118 systemd-networkd[1432]: cilium_vxlan: Gained carrier Mar 20 21:27:30.333292 kernel: NET: Registered PF_ALG protocol family Mar 20 21:27:31.024811 systemd-networkd[1432]: lxc_health: Link UP Mar 20 21:27:31.029672 systemd-networkd[1432]: lxc_health: Gained carrier Mar 20 21:27:31.452748 systemd-networkd[1432]: lxc68a52f1f382c: Link UP Mar 20 21:27:31.455000 systemd-networkd[1432]: lxc484a005849e8: Link UP Mar 20 21:27:31.464297 kernel: eth0: renamed from tmpaa95d Mar 20 21:27:31.467284 kernel: eth0: renamed from tmpa0e12 Mar 20 21:27:31.476416 systemd-networkd[1432]: lxc68a52f1f382c: Gained carrier Mar 20 21:27:31.478510 systemd-networkd[1432]: lxc484a005849e8: Gained carrier Mar 20 21:27:31.986401 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Mar 20 21:27:32.751436 systemd-networkd[1432]: lxc68a52f1f382c: Gained IPv6LL Mar 20 21:27:32.943752 systemd-networkd[1432]: lxc_health: Gained IPv6LL Mar 20 21:27:33.391483 systemd-networkd[1432]: lxc484a005849e8: Gained IPv6LL Mar 20 21:27:34.851546 containerd[1501]: time="2025-03-20T21:27:34.851498253Z" level=info msg="connecting to shim aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2" address="unix:///run/containerd/s/92a31c3fdbc7f33eec25f6c56f9d6cac5266fcd882beec42436d4e7301bd687b" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:34.866128 containerd[1501]: time="2025-03-20T21:27:34.865681137Z" level=info msg="connecting to shim a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f" address="unix:///run/containerd/s/c57564ece2e51457f7a4bedb63dd3a2b6ef8ecd1919895d7aca1ec5e829b3ef6" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:27:34.883417 systemd[1]: Started cri-containerd-aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2.scope - libcontainer container aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2. Mar 20 21:27:34.887598 systemd[1]: Started cri-containerd-a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f.scope - libcontainer container a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f. Mar 20 21:27:34.894686 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:47336.service - OpenSSH per-connection server daemon (10.0.0.1:47336). 
Mar 20 21:27:34.899007 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:27:34.910770 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:27:34.941044 containerd[1501]: time="2025-03-20T21:27:34.940981152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2k8hb,Uid:93040ed3-839e-4912-89a8-2ceae9c2bb09,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2\"" Mar 20 21:27:34.943516 containerd[1501]: time="2025-03-20T21:27:34.943482328Z" level=info msg="CreateContainer within sandbox \"aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:27:34.956691 containerd[1501]: time="2025-03-20T21:27:34.956645277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c24w5,Uid:58229e44-87de-48f6-b300-a72606b0406e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f\"" Mar 20 21:27:34.958057 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 47336 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:34.959585 containerd[1501]: time="2025-03-20T21:27:34.959433372Z" level=info msg="CreateContainer within sandbox \"a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:27:34.960559 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:34.968518 systemd-logind[1483]: New session 13 of user core. Mar 20 21:27:34.972108 containerd[1501]: time="2025-03-20T21:27:34.972058913Z" level=info msg="Container c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:34.975436 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 20 21:27:34.977692 containerd[1501]: time="2025-03-20T21:27:34.977632154Z" level=info msg="Container e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:27:34.994224 containerd[1501]: time="2025-03-20T21:27:34.994159390Z" level=info msg="CreateContainer within sandbox \"aa95d3438b420019adeea818f3600136c716044ee2e67c5a37b79e4edcb343e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff\"" Mar 20 21:27:34.994738 containerd[1501]: time="2025-03-20T21:27:34.994712328Z" level=info msg="StartContainer for \"c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff\"" Mar 20 21:27:34.995836 containerd[1501]: time="2025-03-20T21:27:34.995802715Z" level=info msg="connecting to shim c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff" address="unix:///run/containerd/s/92a31c3fdbc7f33eec25f6c56f9d6cac5266fcd882beec42436d4e7301bd687b" protocol=ttrpc version=3 Mar 20 21:27:35.002376 containerd[1501]: time="2025-03-20T21:27:35.002316583Z" level=info msg="CreateContainer within sandbox \"a0e12a4ed60a439b0b3209ab3eed99a5e16ec7d9d155a3746d67a41ba839600f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37\"" Mar 20 21:27:35.003828 containerd[1501]: time="2025-03-20T21:27:35.003721861Z" level=info msg="StartContainer for \"e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37\"" Mar 20 21:27:35.004781 containerd[1501]: time="2025-03-20T21:27:35.004753006Z" level=info msg="connecting to shim e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37" address="unix:///run/containerd/s/c57564ece2e51457f7a4bedb63dd3a2b6ef8ecd1919895d7aca1ec5e829b3ef6" protocol=ttrpc version=3 Mar 20 21:27:35.014521 systemd[1]: Started cri-containerd-c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff.scope - libcontainer container c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff. Mar 20 21:27:35.029414 systemd[1]: Started cri-containerd-e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37.scope - libcontainer container e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37. Mar 20 21:27:35.056065 containerd[1501]: time="2025-03-20T21:27:35.055956800Z" level=info msg="StartContainer for \"c882df2352331480e566cff01ec4fa6fcd0f34903172c173dcd36e7816e916ff\" returns successfully" Mar 20 21:27:35.069147 containerd[1501]: time="2025-03-20T21:27:35.069104991Z" level=info msg="StartContainer for \"e0aa3d748a6b99f1cffe9b0756fe84d997b89ca352d19903cd63b5e421cacc37\" returns successfully" Mar 20 21:27:35.102446 sshd[4089]: Connection closed by 10.0.0.1 port 47336 Mar 20 21:27:35.104473 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:35.116812 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:47336.service: Deactivated successfully. Mar 20 21:27:35.119338 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:27:35.121304 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:27:35.123050 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:47338.service - OpenSSH per-connection server daemon (10.0.0.1:47338). Mar 20 21:27:35.124205 systemd-logind[1483]: Removed session 13. 
Mar 20 21:27:35.169864 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 47338 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:35.171579 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:35.176079 systemd-logind[1483]: New session 14 of user core. Mar 20 21:27:35.192473 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 21:27:35.349381 sshd[4167]: Connection closed by 10.0.0.1 port 47338 Mar 20 21:27:35.349897 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:35.363513 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:47338.service: Deactivated successfully. Mar 20 21:27:35.366017 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:27:35.367140 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:27:35.373090 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:47348.service - OpenSSH per-connection server daemon (10.0.0.1:47348). Mar 20 21:27:35.375161 systemd-logind[1483]: Removed session 14. Mar 20 21:27:35.426778 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 47348 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:35.428881 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:35.434037 systemd-logind[1483]: New session 15 of user core. Mar 20 21:27:35.447458 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:27:35.570249 sshd[4184]: Connection closed by 10.0.0.1 port 47348 Mar 20 21:27:35.570638 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:35.574252 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:47348.service: Deactivated successfully. Mar 20 21:27:35.577923 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:27:35.579973 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:27:35.581429 systemd-logind[1483]: Removed session 15. Mar 20 21:27:35.781061 kubelet[2752]: I0320 21:27:35.780457 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2k8hb" podStartSLOduration=33.780438707 podStartE2EDuration="33.780438707s" podCreationTimestamp="2025-03-20 21:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:27:35.779095666 +0000 UTC m=+49.310818078" watchObservedRunningTime="2025-03-20 21:27:35.780438707 +0000 UTC m=+49.312161119" Mar 20 21:27:35.781061 kubelet[2752]: I0320 21:27:35.780826 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c24w5" podStartSLOduration=32.780820724 podStartE2EDuration="32.780820724s" podCreationTimestamp="2025-03-20 21:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:27:35.769244384 +0000 UTC m=+49.300966866" watchObservedRunningTime="2025-03-20 21:27:35.780820724 +0000 UTC m=+49.312543136" Mar 20 21:27:40.586526 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:38754.service - OpenSSH per-connection server daemon (10.0.0.1:38754). 
Mar 20 21:27:40.639710 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 38754 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:40.641325 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:40.645498 systemd-logind[1483]: New session 16 of user core. Mar 20 21:27:40.653448 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:27:40.770485 sshd[4209]: Connection closed by 10.0.0.1 port 38754 Mar 20 21:27:40.770841 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:40.775725 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:38754.service: Deactivated successfully. Mar 20 21:27:40.777883 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:27:40.778583 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Mar 20 21:27:40.779660 systemd-logind[1483]: Removed session 16. Mar 20 21:27:45.786303 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:58356.service - OpenSSH per-connection server daemon (10.0.0.1:58356). Mar 20 21:27:45.841222 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 58356 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:45.842604 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:45.846436 systemd-logind[1483]: New session 17 of user core. Mar 20 21:27:45.856372 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:27:45.960447 sshd[4225]: Connection closed by 10.0.0.1 port 58356 Mar 20 21:27:45.960902 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:45.971072 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:58356.service: Deactivated successfully. Mar 20 21:27:45.973029 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:27:45.974518 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Mar 20 21:27:45.975858 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:58368.service - OpenSSH per-connection server daemon (10.0.0.1:58368). Mar 20 21:27:45.976790 systemd-logind[1483]: Removed session 17. Mar 20 21:27:46.026278 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:46.027759 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:46.031899 systemd-logind[1483]: New session 18 of user core. Mar 20 21:27:46.042418 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:27:46.281317 sshd[4240]: Connection closed by 10.0.0.1 port 58368 Mar 20 21:27:46.281715 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:46.290038 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:58368.service: Deactivated successfully. Mar 20 21:27:46.292192 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:27:46.293841 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:27:46.295484 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:58380.service - OpenSSH per-connection server daemon (10.0.0.1:58380). Mar 20 21:27:46.296346 systemd-logind[1483]: Removed session 18. 
Mar 20 21:27:46.348643 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 58380 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:46.350166 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:46.354734 systemd-logind[1483]: New session 19 of user core. Mar 20 21:27:46.364440 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 21:27:47.645774 sshd[4253]: Connection closed by 10.0.0.1 port 58380 Mar 20 21:27:47.646321 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:47.656485 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:58380.service: Deactivated successfully. Mar 20 21:27:47.658578 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:27:47.660480 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:27:47.661839 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:58386.service - OpenSSH per-connection server daemon (10.0.0.1:58386). Mar 20 21:27:47.662760 systemd-logind[1483]: Removed session 19. Mar 20 21:27:47.712016 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 58386 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:47.713615 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:47.718076 systemd-logind[1483]: New session 20 of user core. Mar 20 21:27:47.731366 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:27:48.308224 sshd[4279]: Connection closed by 10.0.0.1 port 58386 Mar 20 21:27:48.308682 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:48.318129 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:58386.service: Deactivated successfully. Mar 20 21:27:48.320036 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:27:48.321679 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:27:48.323297 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:58398.service - OpenSSH per-connection server daemon (10.0.0.1:58398). Mar 20 21:27:48.324076 systemd-logind[1483]: Removed session 20. Mar 20 21:27:48.372425 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 58398 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:48.373898 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:48.378580 systemd-logind[1483]: New session 21 of user core. Mar 20 21:27:48.389436 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:27:48.496439 sshd[4293]: Connection closed by 10.0.0.1 port 58398 Mar 20 21:27:48.496759 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:48.500741 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:58398.service: Deactivated successfully. Mar 20 21:27:48.502814 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:27:48.503467 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Mar 20 21:27:48.504324 systemd-logind[1483]: Removed session 21. Mar 20 21:27:53.508956 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:58404.service - OpenSSH per-connection server daemon (10.0.0.1:58404). 
Mar 20 21:27:53.561958 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 58404 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:53.563323 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:53.567180 systemd-logind[1483]: New session 22 of user core. Mar 20 21:27:53.582366 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 20 21:27:53.685507 sshd[4311]: Connection closed by 10.0.0.1 port 58404 Mar 20 21:27:53.685828 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:53.689934 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:58404.service: Deactivated successfully. Mar 20 21:27:53.692017 systemd[1]: session-22.scope: Deactivated successfully. Mar 20 21:27:53.692699 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Mar 20 21:27:53.693697 systemd-logind[1483]: Removed session 22. Mar 20 21:27:58.703314 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:37752.service - OpenSSH per-connection server daemon (10.0.0.1:37752). Mar 20 21:27:58.751234 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 37752 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:27:58.752550 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:27:58.756402 systemd-logind[1483]: New session 23 of user core. Mar 20 21:27:58.766371 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 20 21:27:58.870993 sshd[4327]: Connection closed by 10.0.0.1 port 37752 Mar 20 21:27:58.871339 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Mar 20 21:27:58.875228 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:37752.service: Deactivated successfully. Mar 20 21:27:58.877289 systemd[1]: session-23.scope: Deactivated successfully. Mar 20 21:27:58.878158 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. Mar 20 21:27:58.879045 systemd-logind[1483]: Removed session 23. Mar 20 21:28:03.883795 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:37766.service - OpenSSH per-connection server daemon (10.0.0.1:37766). Mar 20 21:28:03.935167 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 37766 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:28:03.936569 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:03.940561 systemd-logind[1483]: New session 24 of user core. Mar 20 21:28:03.948367 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 21:28:04.049354 sshd[4344]: Connection closed by 10.0.0.1 port 37766 Mar 20 21:28:04.049668 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:04.063211 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:37766.service: Deactivated successfully. Mar 20 21:28:04.065118 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 21:28:04.066534 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit. Mar 20 21:28:04.067920 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:37768.service - OpenSSH per-connection server daemon (10.0.0.1:37768). Mar 20 21:28:04.069024 systemd-logind[1483]: Removed session 24. 
Mar 20 21:28:04.124436 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 37768 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:28:04.125848 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:04.130404 systemd-logind[1483]: New session 25 of user core. Mar 20 21:28:04.138388 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 20 21:28:04.648446 kubelet[2752]: E0320 21:28:04.648394 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:05.690733 containerd[1501]: time="2025-03-20T21:28:05.690690898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" id:\"2899622ccbe87d49cdfb1542f322aaac89537c4521591dab2f4c97ff25850a40\" pid:4380 exited_at:{seconds:1742506085 nanos:690386635}" Mar 20 21:28:05.692544 containerd[1501]: time="2025-03-20T21:28:05.692522332Z" level=info msg="StopContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" with timeout 2 (s)" Mar 20 21:28:05.692834 containerd[1501]: time="2025-03-20T21:28:05.692816717Z" level=info msg="Stop container \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" with signal terminated" Mar 20 21:28:05.697782 containerd[1501]: time="2025-03-20T21:28:05.697748309Z" level=info msg="StopContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" with timeout 30 (s)" Mar 20 21:28:05.703698 containerd[1501]: time="2025-03-20T21:28:05.701043171Z" level=info msg="Stop container \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" with signal terminated" Mar 20 21:28:05.703698 containerd[1501]: time="2025-03-20T21:28:05.702332775Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:28:05.703442 systemd-networkd[1432]: lxc_health: Link DOWN Mar 20 21:28:05.703447 systemd-networkd[1432]: lxc_health: Lost carrier Mar 20 21:28:05.717725 systemd[1]: cri-containerd-b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41.scope: Deactivated successfully. Mar 20 21:28:05.718521 containerd[1501]: time="2025-03-20T21:28:05.717727473Z" level=info msg="received exit event container_id:\"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" id:\"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" pid:3164 exited_at:{seconds:1742506085 nanos:717462404}" Mar 20 21:28:05.718521 containerd[1501]: time="2025-03-20T21:28:05.717836272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" id:\"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" pid:3164 exited_at:{seconds:1742506085 nanos:717462404}" Mar 20 21:28:05.719435 systemd[1]: cri-containerd-c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8.scope: Deactivated successfully. Mar 20 21:28:05.719881 systemd[1]: cri-containerd-c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8.scope: Consumed 6.781s CPU time, 126.4M memory peak, 180K read from disk, 13.3M written to disk. 
Mar 20 21:28:05.720916 containerd[1501]: time="2025-03-20T21:28:05.720720375Z" level=info msg="received exit event container_id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" pid:3458 exited_at:{seconds:1742506085 nanos:720582891}" Mar 20 21:28:05.720916 containerd[1501]: time="2025-03-20T21:28:05.720810428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" id:\"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" pid:3458 exited_at:{seconds:1742506085 nanos:720582891}" Mar 20 21:28:05.740833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8-rootfs.mount: Deactivated successfully. Mar 20 21:28:05.740954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41-rootfs.mount: Deactivated successfully. Mar 20 21:28:05.789073 containerd[1501]: time="2025-03-20T21:28:05.788918756Z" level=info msg="StopContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" returns successfully" Mar 20 21:28:05.789859 containerd[1501]: time="2025-03-20T21:28:05.789820696Z" level=info msg="StopContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" returns successfully" Mar 20 21:28:05.790447 containerd[1501]: time="2025-03-20T21:28:05.790419685Z" level=info msg="StopPodSandbox for \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\"" Mar 20 21:28:05.790568 containerd[1501]: time="2025-03-20T21:28:05.790430576Z" level=info msg="StopPodSandbox for \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\"" Mar 20 21:28:05.796337 containerd[1501]: time="2025-03-20T21:28:05.796295088Z" level=info msg="Container to stop \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.796337 containerd[1501]: time="2025-03-20T21:28:05.796327841Z" level=info msg="Container to stop \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.796407 containerd[1501]: time="2025-03-20T21:28:05.796338040Z" level=info msg="Container to stop \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.796407 containerd[1501]: time="2025-03-20T21:28:05.796347359Z" level=info msg="Container to stop \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.796407 containerd[1501]: time="2025-03-20T21:28:05.796355744Z" level=info msg="Container to stop \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.797294 containerd[1501]: time="2025-03-20T21:28:05.797249749Z" level=info msg="Container to stop \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:28:05.803470 systemd[1]: cri-containerd-24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e.scope: Deactivated successfully. 
Mar 20 21:28:05.805110 containerd[1501]: time="2025-03-20T21:28:05.805071827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" id:\"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" pid:2964 exit_status:137 exited_at:{seconds:1742506085 nanos:803790108}" Mar 20 21:28:05.805456 systemd[1]: cri-containerd-a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1.scope: Deactivated successfully. Mar 20 21:28:05.828789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1-rootfs.mount: Deactivated successfully. Mar 20 21:28:05.832878 containerd[1501]: time="2025-03-20T21:28:05.832836970Z" level=info msg="shim disconnected" id=a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1 namespace=k8s.io Mar 20 21:28:05.832878 containerd[1501]: time="2025-03-20T21:28:05.832868039Z" level=warning msg="cleaning up after shim disconnected" id=a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1 namespace=k8s.io Mar 20 21:28:05.835319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e-rootfs.mount: Deactivated successfully. Mar 20 21:28:05.840678 containerd[1501]: time="2025-03-20T21:28:05.832893257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:28:05.840678 containerd[1501]: time="2025-03-20T21:28:05.838505245Z" level=info msg="shim disconnected" id=24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e namespace=k8s.io Mar 20 21:28:05.840678 containerd[1501]: time="2025-03-20T21:28:05.840663075Z" level=warning msg="cleaning up after shim disconnected" id=24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e namespace=k8s.io Mar 20 21:28:05.840978 containerd[1501]: time="2025-03-20T21:28:05.840671852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:28:05.859735 containerd[1501]: time="2025-03-20T21:28:05.859650846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" id:\"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" pid:2878 exit_status:137 exited_at:{seconds:1742506085 nanos:806350751}" Mar 20 21:28:05.862995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e-shm.mount: Deactivated successfully. Mar 20 21:28:05.863125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1-shm.mount: Deactivated successfully. 
Mar 20 21:28:05.870075 containerd[1501]: time="2025-03-20T21:28:05.870026744Z" level=info msg="TearDown network for sandbox \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" successfully" Mar 20 21:28:05.870075 containerd[1501]: time="2025-03-20T21:28:05.870061310Z" level=info msg="StopPodSandbox for \"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" returns successfully" Mar 20 21:28:05.872474 containerd[1501]: time="2025-03-20T21:28:05.872443131Z" level=info msg="TearDown network for sandbox \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" successfully" Mar 20 21:28:05.872474 containerd[1501]: time="2025-03-20T21:28:05.872471395Z" level=info msg="StopPodSandbox for \"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" returns successfully" Mar 20 21:28:05.877145 containerd[1501]: time="2025-03-20T21:28:05.877102140Z" level=info msg="received exit event sandbox_id:\"a69dac36ec00b9f0364bbadc0e9a25f4b86ae2dee35d93e55fa18f0f92f0d3c1\" exit_status:137 exited_at:{seconds:1742506085 nanos:806350751}" Mar 20 21:28:05.877436 containerd[1501]: time="2025-03-20T21:28:05.877405442Z" level=info msg="received exit event sandbox_id:\"24e26770432ded7d9a386cd38a39bf03dc001bff8a31b9552e1d22d48379dc2e\" exit_status:137 exited_at:{seconds:1742506085 nanos:803790108}" Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067665 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cni-path\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067718 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-kernel\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067740 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fecb29e0-de4f-4ce6-8305-c177206400d6-cilium-config-path\") pod \"fecb29e0-de4f-4ce6-8305-c177206400d6\" (UID: \"fecb29e0-de4f-4ce6-8305-c177206400d6\") " Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067756 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-hubble-tls\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067768 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-etc-cni-netd\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.067818 kubelet[2752]: I0320 21:28:06.067780 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-run\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067792 2752 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-xtables-lock\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067807 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqssb\" (UniqueName: \"kubernetes.io/projected/fecb29e0-de4f-4ce6-8305-c177206400d6-kube-api-access-rqssb\") pod \"fecb29e0-de4f-4ce6-8305-c177206400d6\" (UID: \"fecb29e0-de4f-4ce6-8305-c177206400d6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067822 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552218fd-bedf-4096-aa60-95b93cda75a6-clustermesh-secrets\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067836 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-hostproc\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067849 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-bpf-maps\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068442 kubelet[2752]: I0320 21:28:06.067860 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-net\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068609 kubelet[2752]: I0320 21:28:06.067876 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-lib-modules\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068609 kubelet[2752]: I0320 21:28:06.067890 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-config-path\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068609 kubelet[2752]: I0320 21:28:06.067903 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6dmt\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068609 kubelet[2752]: I0320 21:28:06.067916 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-cgroup\") pod \"552218fd-bedf-4096-aa60-95b93cda75a6\" (UID: \"552218fd-bedf-4096-aa60-95b93cda75a6\") " Mar 20 21:28:06.068609 kubelet[2752]: I0320 21:28:06.067679 2752 operation_generator.go:887] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.068747 kubelet[2752]: I0320 21:28:06.067953 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.068747 kubelet[2752]: I0320 21:28:06.068239 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.068747 kubelet[2752]: I0320 21:28:06.068300 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.068747 kubelet[2752]: I0320 21:28:06.068318 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.068747 kubelet[2752]: I0320 21:28:06.068336 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.071812 kubelet[2752]: I0320 21:28:06.071779 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecb29e0-de4f-4ce6-8305-c177206400d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fecb29e0-de4f-4ce6-8305-c177206400d6" (UID: "fecb29e0-de4f-4ce6-8305-c177206400d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:28:06.071937 kubelet[2752]: I0320 21:28:06.071869 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.071937 kubelet[2752]: I0320 21:28:06.071895 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.071937 kubelet[2752]: I0320 21:28:06.071909 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.071937 kubelet[2752]: I0320 21:28:06.071921 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:28:06.075200 kubelet[2752]: I0320 21:28:06.075143 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:28:06.075927 systemd[1]: var-lib-kubelet-pods-552218fd\x2dbedf\x2d4096\x2daa60\x2d95b93cda75a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 21:28:06.076087 kubelet[2752]: I0320 21:28:06.076073 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt" (OuterVolumeSpecName: "kube-api-access-k6dmt") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "kube-api-access-k6dmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:28:06.076830 kubelet[2752]: I0320 21:28:06.076791 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552218fd-bedf-4096-aa60-95b93cda75a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 21:28:06.077059 kubelet[2752]: I0320 21:28:06.076990 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "552218fd-bedf-4096-aa60-95b93cda75a6" (UID: "552218fd-bedf-4096-aa60-95b93cda75a6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:28:06.077202 kubelet[2752]: I0320 21:28:06.077175 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecb29e0-de4f-4ce6-8305-c177206400d6-kube-api-access-rqssb" (OuterVolumeSpecName: "kube-api-access-rqssb") pod "fecb29e0-de4f-4ce6-8305-c177206400d6" (UID: "fecb29e0-de4f-4ce6-8305-c177206400d6"). InnerVolumeSpecName "kube-api-access-rqssb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168693 2752 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552218fd-bedf-4096-aa60-95b93cda75a6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168728 2752 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168741 2752 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168752 2752 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168763 2752 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.168757 kubelet[2752]: I0320 21:28:06.168775 2752 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168787 2752 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168799 2752 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k6dmt\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-kube-api-access-k6dmt\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168808 2752 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168817 2752 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fecb29e0-de4f-4ce6-8305-c177206400d6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168824 2752 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 
21:28:06.168833 2752 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168840 2752 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552218fd-bedf-4096-aa60-95b93cda75a6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169055 kubelet[2752]: I0320 21:28:06.168848 2752 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169247 kubelet[2752]: I0320 21:28:06.168855 2752 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552218fd-bedf-4096-aa60-95b93cda75a6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.169247 kubelet[2752]: I0320 21:28:06.168863 2752 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rqssb\" (UniqueName: \"kubernetes.io/projected/fecb29e0-de4f-4ce6-8305-c177206400d6-kube-api-access-rqssb\") on node \"localhost\" DevicePath \"\"" Mar 20 21:28:06.656871 systemd[1]: Removed slice kubepods-besteffort-podfecb29e0_de4f_4ce6_8305_c177206400d6.slice - libcontainer container kubepods-besteffort-podfecb29e0_de4f_4ce6_8305_c177206400d6.slice. Mar 20 21:28:06.658104 systemd[1]: Removed slice kubepods-burstable-pod552218fd_bedf_4096_aa60_95b93cda75a6.slice - libcontainer container kubepods-burstable-pod552218fd_bedf_4096_aa60_95b93cda75a6.slice. Mar 20 21:28:06.658225 systemd[1]: kubepods-burstable-pod552218fd_bedf_4096_aa60_95b93cda75a6.slice: Consumed 6.898s CPU time, 126.7M memory peak, 216K read from disk, 13.3M written to disk. Mar 20 21:28:06.693511 kubelet[2752]: E0320 21:28:06.693479 2752 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:28:06.740788 systemd[1]: var-lib-kubelet-pods-552218fd\x2dbedf\x2d4096\x2daa60\x2d95b93cda75a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6dmt.mount: Deactivated successfully. Mar 20 21:28:06.740955 systemd[1]: var-lib-kubelet-pods-fecb29e0\x2dde4f\x2d4ce6\x2d8305\x2dc177206400d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqssb.mount: Deactivated successfully. Mar 20 21:28:06.741068 systemd[1]: var-lib-kubelet-pods-552218fd\x2dbedf\x2d4096\x2daa60\x2d95b93cda75a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 20 21:28:06.819074 kubelet[2752]: I0320 21:28:06.818651 2752 scope.go:117] "RemoveContainer" containerID="c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8" Mar 20 21:28:06.823581 containerd[1501]: time="2025-03-20T21:28:06.823531068Z" level=info msg="RemoveContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\"" Mar 20 21:28:06.833445 containerd[1501]: time="2025-03-20T21:28:06.833300231Z" level=info msg="RemoveContainer for \"c70e906aa4d105816fcac82f8a244abda414da6dd749ddecf1ac32083d0511a8\" returns successfully" Mar 20 21:28:06.833654 kubelet[2752]: I0320 21:28:06.833596 2752 scope.go:117] "RemoveContainer" containerID="e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01" Mar 20 21:28:06.835817 containerd[1501]: time="2025-03-20T21:28:06.835770006Z" level=info msg="RemoveContainer for \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\"" Mar 20 21:28:06.841845 containerd[1501]: time="2025-03-20T21:28:06.841803075Z" level=info msg="RemoveContainer for \"e63b4dca13dbf6fe41691bcae48add57404481b161d02b8696c61c018e3edd01\" returns successfully" Mar 20 21:28:06.842020 kubelet[2752]: I0320 21:28:06.841993 2752 scope.go:117] "RemoveContainer" containerID="77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761" Mar 20 21:28:06.844255 containerd[1501]: time="2025-03-20T21:28:06.844211654Z" level=info msg="RemoveContainer for \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\"" Mar 20 21:28:06.849726 containerd[1501]: time="2025-03-20T21:28:06.849689267Z" level=info msg="RemoveContainer for \"77662f9ceb73714de4a91b3afb8885f3a821cad3481a32e3256d5b97beed5761\" returns successfully" Mar 20 21:28:06.849925 kubelet[2752]: I0320 21:28:06.849898 2752 scope.go:117] "RemoveContainer" containerID="cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1" Mar 20 21:28:06.851231 containerd[1501]: time="2025-03-20T21:28:06.851206266Z" level=info msg="RemoveContainer for \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\"" Mar 20 21:28:06.855288 containerd[1501]: time="2025-03-20T21:28:06.855242045Z" level=info msg="RemoveContainer for \"cd2658d5ca5eb7ac620f99e95fd7f72aa72ad7a269928a716aa95fec786aade1\" returns successfully" Mar 20 21:28:06.855476 kubelet[2752]: I0320 21:28:06.855438 2752 scope.go:117] "RemoveContainer" containerID="0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2" Mar 20 21:28:06.856991 containerd[1501]: time="2025-03-20T21:28:06.856944138Z" level=info msg="RemoveContainer for \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\"" Mar 20 21:28:06.865692 containerd[1501]: time="2025-03-20T21:28:06.865638490Z" level=info msg="RemoveContainer for \"0c2276917a0c449ac93a17aaba16f47109978dd3cb1847c4f2418d0d3a00a6b2\" returns successfully" Mar 20 21:28:06.865849 kubelet[2752]: I0320 21:28:06.865816 2752 scope.go:117] "RemoveContainer" containerID="b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41" Mar 20 21:28:06.867249 containerd[1501]: time="2025-03-20T21:28:06.867210875Z" level=info msg="RemoveContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\"" Mar 20 21:28:06.871949 containerd[1501]: time="2025-03-20T21:28:06.871892252Z" level=info msg="RemoveContainer for \"b83c388802ac5937a8fe0dbe600f1dae0739815ae68ffe5138ce276e2b08dc41\" returns successfully" Mar 20 21:28:07.438233 sshd[4359]: Connection closed by 10.0.0.1 port 37768 Mar 20 21:28:07.438653 sshd-session[4356]: pam_unix(sshd:session): session 
closed for user core Mar 20 21:28:07.447020 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:37768.service: Deactivated successfully. Mar 20 21:28:07.448862 systemd[1]: session-25.scope: Deactivated successfully. Mar 20 21:28:07.450849 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit. Mar 20 21:28:07.452613 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:45912.service - OpenSSH per-connection server daemon (10.0.0.1:45912). Mar 20 21:28:07.453508 systemd-logind[1483]: Removed session 25. Mar 20 21:28:07.503381 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 45912 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:28:07.505222 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:07.509809 systemd-logind[1483]: New session 26 of user core. Mar 20 21:28:07.520379 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 20 21:28:07.968366 sshd[4513]: Connection closed by 10.0.0.1 port 45912 Mar 20 21:28:07.970930 sshd-session[4510]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:07.983344 kubelet[2752]: I0320 21:28:07.983286 2752 topology_manager.go:215] "Topology Admit Handler" podUID="5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273" podNamespace="kube-system" podName="cilium-6nzrd" Mar 20 21:28:07.983344 kubelet[2752]: E0320 21:28:07.983341 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="clean-cilium-state" Mar 20 21:28:07.983773 kubelet[2752]: E0320 21:28:07.983352 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="cilium-agent" Mar 20 21:28:07.983773 kubelet[2752]: E0320 21:28:07.983723 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fecb29e0-de4f-4ce6-8305-c177206400d6" containerName="cilium-operator" Mar 20 21:28:07.983773 kubelet[2752]: E0320 21:28:07.983732 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="mount-cgroup" Mar 20 21:28:07.983773 kubelet[2752]: E0320 21:28:07.983740 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="apply-sysctl-overwrites" Mar 20 21:28:07.983773 kubelet[2752]: E0320 21:28:07.983748 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="mount-bpf-fs" Mar 20 21:28:07.983941 kubelet[2752]: I0320 21:28:07.983806 2752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecb29e0-de4f-4ce6-8305-c177206400d6" containerName="cilium-operator" Mar 20 21:28:07.983941 kubelet[2752]: I0320 21:28:07.983818 2752 memory_manager.go:354] "RemoveStaleState removing state" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" containerName="cilium-agent" Mar 20 21:28:07.985384 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:45912.service: Deactivated successfully. Mar 20 21:28:07.987511 systemd[1]: session-26.scope: Deactivated successfully. Mar 20 21:28:07.992441 systemd-logind[1483]: Session 26 logged out. Waiting for processes to exit. Mar 20 21:28:08.003399 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:45914.service - OpenSSH per-connection server daemon (10.0.0.1:45914). Mar 20 21:28:08.005670 systemd-logind[1483]: Removed session 26. 
Mar 20 21:28:08.019564 systemd[1]: Created slice kubepods-burstable-pod5c7cc8a2_007b_4a90_b5e2_f0b0bf2bb273.slice - libcontainer container kubepods-burstable-pod5c7cc8a2_007b_4a90_b5e2_f0b0bf2bb273.slice. Mar 20 21:28:08.053719 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 45914 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:28:08.055063 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:08.059258 systemd-logind[1483]: New session 27 of user core. Mar 20 21:28:08.074492 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 20 21:28:08.123327 sshd[4527]: Connection closed by 10.0.0.1 port 45914 Mar 20 21:28:08.123801 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Mar 20 21:28:08.132191 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:45914.service: Deactivated successfully. Mar 20 21:28:08.134080 systemd[1]: session-27.scope: Deactivated successfully. Mar 20 21:28:08.135819 systemd-logind[1483]: Session 27 logged out. Waiting for processes to exit. Mar 20 21:28:08.137206 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:45918.service - OpenSSH per-connection server daemon (10.0.0.1:45918). Mar 20 21:28:08.138136 systemd-logind[1483]: Removed session 27. Mar 20 21:28:08.180081 kubelet[2752]: I0320 21:28:08.180022 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-cni-path\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180081 kubelet[2752]: I0320 21:28:08.180056 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-cilium-config-path\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180081 kubelet[2752]: I0320 21:28:08.180076 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-cilium-ipsec-secrets\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180081 kubelet[2752]: I0320 21:28:08.180090 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsd5m\" (UniqueName: \"kubernetes.io/projected/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-kube-api-access-tsd5m\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180385 kubelet[2752]: I0320 21:28:08.180107 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-hostproc\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180385 kubelet[2752]: I0320 21:28:08.180120 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-bpf-maps\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 
21:28:08.180385 kubelet[2752]: I0320 21:28:08.180133 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-hubble-tls\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180385 kubelet[2752]: I0320 21:28:08.180149 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-cilium-run\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180385 kubelet[2752]: I0320 21:28:08.180216 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-lib-modules\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180385 kubelet[2752]: I0320 21:28:08.180310 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-cilium-cgroup\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180591 kubelet[2752]: I0320 21:28:08.180344 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-host-proc-sys-kernel\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180591 kubelet[2752]: I0320 21:28:08.180360 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-etc-cni-netd\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180591 kubelet[2752]: I0320 21:28:08.180375 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-clustermesh-secrets\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180591 kubelet[2752]: I0320 21:28:08.180393 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-xtables-lock\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.180591 kubelet[2752]: I0320 21:28:08.180407 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273-host-proc-sys-net\") pod \"cilium-6nzrd\" (UID: \"5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273\") " pod="kube-system/cilium-6nzrd" Mar 20 21:28:08.188577 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:28:08.190302 sshd-session[4533]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:28:08.195181 systemd-logind[1483]: New session 28 of user core. Mar 20 21:28:08.207456 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 20 21:28:08.324184 kubelet[2752]: E0320 21:28:08.324059 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:08.324651 containerd[1501]: time="2025-03-20T21:28:08.324612251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nzrd,Uid:5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273,Namespace:kube-system,Attempt:0,}" Mar 20 21:28:08.346186 containerd[1501]: time="2025-03-20T21:28:08.346150333Z" level=info msg="connecting to shim 6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:28:08.372506 systemd[1]: Started cri-containerd-6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e.scope - libcontainer container 6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e. Mar 20 21:28:08.401732 containerd[1501]: time="2025-03-20T21:28:08.401679039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nzrd,Uid:5c7cc8a2-007b-4a90-b5e2-f0b0bf2bb273,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\"" Mar 20 21:28:08.402443 kubelet[2752]: E0320 21:28:08.402362 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:08.404992 containerd[1501]: time="2025-03-20T21:28:08.404862927Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:28:08.412575 containerd[1501]: time="2025-03-20T21:28:08.412499724Z" level=info msg="Container c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:28:08.421452 containerd[1501]: time="2025-03-20T21:28:08.421408066Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\"" Mar 20 21:28:08.421980 containerd[1501]: time="2025-03-20T21:28:08.421939334Z" level=info msg="StartContainer for \"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\"" Mar 20 21:28:08.422735 containerd[1501]: time="2025-03-20T21:28:08.422703517Z" level=info msg="connecting to shim c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" protocol=ttrpc version=3 Mar 20 21:28:08.443391 systemd[1]: Started cri-containerd-c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb.scope - libcontainer container c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb. 
Mar 20 21:28:08.470839 containerd[1501]: time="2025-03-20T21:28:08.470797474Z" level=info msg="StartContainer for \"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\" returns successfully" Mar 20 21:28:08.478213 systemd[1]: cri-containerd-c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb.scope: Deactivated successfully. Mar 20 21:28:08.480076 containerd[1501]: time="2025-03-20T21:28:08.480040457Z" level=info msg="received exit event container_id:\"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\" id:\"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\" pid:4606 exited_at:{seconds:1742506088 nanos:479776170}" Mar 20 21:28:08.480344 containerd[1501]: time="2025-03-20T21:28:08.480322106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\" id:\"c72ab1b453bc2dfdef73a3c42d0e303c191d3e1c1a482fcfaa3cf50d318dd7bb\" pid:4606 exited_at:{seconds:1742506088 nanos:479776170}" Mar 20 21:28:08.649561 kubelet[2752]: I0320 21:28:08.649437 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552218fd-bedf-4096-aa60-95b93cda75a6" path="/var/lib/kubelet/pods/552218fd-bedf-4096-aa60-95b93cda75a6/volumes" Mar 20 21:28:08.650433 kubelet[2752]: I0320 21:28:08.650400 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecb29e0-de4f-4ce6-8305-c177206400d6" path="/var/lib/kubelet/pods/fecb29e0-de4f-4ce6-8305-c177206400d6/volumes" Mar 20 21:28:08.828023 kubelet[2752]: E0320 21:28:08.827992 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:28:08.830254 containerd[1501]: time="2025-03-20T21:28:08.829769393Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:28:08.837190 containerd[1501]: time="2025-03-20T21:28:08.837144829Z" level=info msg="Container 9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:28:08.846008 containerd[1501]: time="2025-03-20T21:28:08.845963730Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\"" Mar 20 21:28:08.846782 containerd[1501]: time="2025-03-20T21:28:08.846647149Z" level=info msg="StartContainer for \"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\"" Mar 20 21:28:08.847704 containerd[1501]: time="2025-03-20T21:28:08.847678736Z" level=info msg="connecting to shim 9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" protocol=ttrpc version=3 Mar 20 21:28:08.868538 systemd[1]: Started cri-containerd-9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a.scope - libcontainer container 9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a. 
Mar 20 21:28:08.899454 containerd[1501]: time="2025-03-20T21:28:08.899400590Z" level=info msg="StartContainer for \"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\" returns successfully"
Mar 20 21:28:08.905562 systemd[1]: cri-containerd-9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a.scope: Deactivated successfully.
Mar 20 21:28:08.906757 containerd[1501]: time="2025-03-20T21:28:08.906700352Z" level=info msg="received exit event container_id:\"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\" id:\"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\" pid:4652 exited_at:{seconds:1742506088 nanos:906297599}"
Mar 20 21:28:08.906848 containerd[1501]: time="2025-03-20T21:28:08.906762500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\" id:\"9fbf923aa8233b066bcf50b2f3a60ed84a86b80e9cb1db5eaa2f56d144b1570a\" pid:4652 exited_at:{seconds:1742506088 nanos:906297599}"
Mar 20 21:28:08.996719 kubelet[2752]: I0320 21:28:08.996678 2752 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T21:28:08Z","lastTransitionTime":"2025-03-20T21:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 20 21:28:09.831491 kubelet[2752]: E0320 21:28:09.831459 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:09.833168 containerd[1501]: time="2025-03-20T21:28:09.833127097Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 21:28:09.843095 containerd[1501]: time="2025-03-20T21:28:09.842678511Z" level=info msg="Container b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:09.856998 containerd[1501]: time="2025-03-20T21:28:09.856948047Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\""
Mar 20 21:28:09.857464 containerd[1501]: time="2025-03-20T21:28:09.857414610Z" level=info msg="StartContainer for \"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\""
Mar 20 21:28:09.858710 containerd[1501]: time="2025-03-20T21:28:09.858690052Z" level=info msg="connecting to shim b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" protocol=ttrpc version=3
Mar 20 21:28:09.882391 systemd[1]: Started cri-containerd-b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578.scope - libcontainer container b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578.
Mar 20 21:28:09.924703 systemd[1]: cri-containerd-b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578.scope: Deactivated successfully.
Mar 20 21:28:09.925687 containerd[1501]: time="2025-03-20T21:28:09.925638280Z" level=info msg="received exit event container_id:\"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\" id:\"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\" pid:4697 exited_at:{seconds:1742506089 nanos:925460299}"
Mar 20 21:28:09.925687 containerd[1501]: time="2025-03-20T21:28:09.925675581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\" id:\"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\" pid:4697 exited_at:{seconds:1742506089 nanos:925460299}"
Mar 20 21:28:09.926682 containerd[1501]: time="2025-03-20T21:28:09.926661599Z" level=info msg="StartContainer for \"b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578\" returns successfully"
Mar 20 21:28:09.948175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32f2d28c645c025e9c6b04821b1819ed33b74405a96c460a1b3b921b147e578-rootfs.mount: Deactivated successfully.
Mar 20 21:28:10.835732 kubelet[2752]: E0320 21:28:10.835608 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:10.839373 containerd[1501]: time="2025-03-20T21:28:10.838729734Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 21:28:10.850827 containerd[1501]: time="2025-03-20T21:28:10.850173810Z" level=info msg="Container 8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:10.862440 containerd[1501]: time="2025-03-20T21:28:10.862396265Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\""
Mar 20 21:28:10.863007 containerd[1501]: time="2025-03-20T21:28:10.862988518Z" level=info msg="StartContainer for \"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\""
Mar 20 21:28:10.863914 containerd[1501]: time="2025-03-20T21:28:10.863889181Z" level=info msg="connecting to shim 8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" protocol=ttrpc version=3
Mar 20 21:28:10.889401 systemd[1]: Started cri-containerd-8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2.scope - libcontainer container 8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2.
Mar 20 21:28:10.916248 systemd[1]: cri-containerd-8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2.scope: Deactivated successfully.
Mar 20 21:28:10.916941 containerd[1501]: time="2025-03-20T21:28:10.916888805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\" id:\"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\" pid:4736 exited_at:{seconds:1742506090 nanos:916529938}"
Mar 20 21:28:10.918341 containerd[1501]: time="2025-03-20T21:28:10.918307509Z" level=info msg="received exit event container_id:\"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\" id:\"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\" pid:4736 exited_at:{seconds:1742506090 nanos:916529938}"
Mar 20 21:28:10.926526 containerd[1501]: time="2025-03-20T21:28:10.926483983Z" level=info msg="StartContainer for \"8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2\" returns successfully"
Mar 20 21:28:10.940151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8308b907a2d20d74d593030767eff3d5860e9682a52fe408932855544ef7f0f2-rootfs.mount: Deactivated successfully.
Mar 20 21:28:11.694458 kubelet[2752]: E0320 21:28:11.694415 2752 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 20 21:28:11.844550 kubelet[2752]: E0320 21:28:11.844480 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:11.847151 containerd[1501]: time="2025-03-20T21:28:11.846931564Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 21:28:11.869620 containerd[1501]: time="2025-03-20T21:28:11.869568473Z" level=info msg="Container 772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:28:11.877835 containerd[1501]: time="2025-03-20T21:28:11.877789303Z" level=info msg="CreateContainer within sandbox \"6d123f065bdbf240385845f6e4e5144b1cbd67ea0ec386c59a190055c28bc63e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\""
Mar 20 21:28:11.878421 containerd[1501]: time="2025-03-20T21:28:11.878378620Z" level=info msg="StartContainer for \"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\""
Mar 20 21:28:11.879472 containerd[1501]: time="2025-03-20T21:28:11.879432506Z" level=info msg="connecting to shim 772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd" address="unix:///run/containerd/s/bc7c272051c57cc8bba900f44277e111fad32cd571b6ab488ab9c17a69408064" protocol=ttrpc version=3
Mar 20 21:28:11.899389 systemd[1]: Started cri-containerd-772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd.scope - libcontainer container 772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd.
Mar 20 21:28:11.934995 containerd[1501]: time="2025-03-20T21:28:11.934956013Z" level=info msg="StartContainer for \"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" returns successfully"
Mar 20 21:28:12.005925 containerd[1501]: time="2025-03-20T21:28:12.005507254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" id:\"b426cbf1e36f32c6d130e84662e27c8c8e84d5038f74678df6e2caf274e916ad\" pid:4803 exited_at:{seconds:1742506092 nanos:5199275}"
Mar 20 21:28:12.339287 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 20 21:28:12.850876 kubelet[2752]: E0320 21:28:12.850834 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:14.326059 kubelet[2752]: E0320 21:28:14.326022 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:14.607015 containerd[1501]: time="2025-03-20T21:28:14.606738657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" id:\"bbc76ab85baf1f0182e4ee9dffe7bf82877397c16301e9cb578ce07c06aedb47\" pid:5148 exit_status:1 exited_at:{seconds:1742506094 nanos:606392686}"
Mar 20 21:28:15.352717 systemd-networkd[1432]: lxc_health: Link UP
Mar 20 21:28:15.362812 systemd-networkd[1432]: lxc_health: Gained carrier
Mar 20 21:28:15.649373 kubelet[2752]: E0320 21:28:15.647853 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:16.327609 kubelet[2752]: E0320 21:28:16.327155 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:16.340708 kubelet[2752]: I0320 21:28:16.340641 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6nzrd" podStartSLOduration=9.34062074 podStartE2EDuration="9.34062074s" podCreationTimestamp="2025-03-20 21:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:28:12.866648865 +0000 UTC m=+86.398371277" watchObservedRunningTime="2025-03-20 21:28:16.34062074 +0000 UTC m=+89.872343153"
Mar 20 21:28:16.463550 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Mar 20 21:28:16.710527 containerd[1501]: time="2025-03-20T21:28:16.710488909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" id:\"b3e3a9720dad4af5448d7eb0c2490b137722f9386853316bde7b18dd713e4498\" pid:5374 exited_at:{seconds:1742506096 nanos:710179700}"
Mar 20 21:28:16.857370 kubelet[2752]: E0320 21:28:16.857339 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:17.858941 kubelet[2752]: E0320 21:28:17.858900 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:28:18.791899 containerd[1501]: time="2025-03-20T21:28:18.791859641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" id:\"74256fa4c1f16ebe0e6db43bb92ebf301fc82a75e1e43fdff18576d9c051adbd\" pid:5408 exited_at:{seconds:1742506098 nanos:791547967}"
Mar 20 21:28:20.940765 containerd[1501]: time="2025-03-20T21:28:20.940716568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"772c670ad1fa83100b2e8da6ba6c5d193cd3b5b2f73039d7f312bcaddd9a89fd\" id:\"4f48ceb53723b75eb12ef11bed33f151e1b83d7569059ac7d730de1537512f87\" pid:5433 exited_at:{seconds:1742506100 nanos:940385557}"
Mar 20 21:28:20.953106 sshd[4536]: Connection closed by 10.0.0.1 port 45918
Mar 20 21:28:20.953704 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Mar 20 21:28:20.958368 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:45918.service: Deactivated successfully.
Mar 20 21:28:20.960492 systemd[1]: session-28.scope: Deactivated successfully.
Mar 20 21:28:20.961176 systemd-logind[1483]: Session 28 logged out. Waiting for processes to exit.
Mar 20 21:28:20.962056 systemd-logind[1483]: Removed session 28.