Mar 20 21:30:45.877483 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 20 19:36:47 -00 2025 Mar 20 21:30:45.877505 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:30:45.877516 kernel: BIOS-provided physical RAM map: Mar 20 21:30:45.877523 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 20 21:30:45.877530 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 20 21:30:45.877536 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 20 21:30:45.877543 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 20 21:30:45.877550 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 20 21:30:45.877557 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 20 21:30:45.877563 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 20 21:30:45.877572 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 20 21:30:45.877579 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 20 21:30:45.877585 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 20 21:30:45.877592 kernel: NX (Execute Disable) protection: active Mar 20 21:30:45.877600 kernel: APIC: Static calls initialized Mar 20 21:30:45.877609 kernel: SMBIOS 2.8 present. 
Mar 20 21:30:45.877616 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 20 21:30:45.877637 kernel: Hypervisor detected: KVM Mar 20 21:30:45.877644 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 20 21:30:45.877651 kernel: kvm-clock: using sched offset of 2307642279 cycles Mar 20 21:30:45.877659 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 20 21:30:45.877666 kernel: tsc: Detected 2794.750 MHz processor Mar 20 21:30:45.877674 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 20 21:30:45.877682 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 20 21:30:45.877689 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 20 21:30:45.877699 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 20 21:30:45.877706 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 20 21:30:45.877714 kernel: Using GB pages for direct mapping Mar 20 21:30:45.877722 kernel: ACPI: Early table checksum verification disabled Mar 20 21:30:45.877729 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 20 21:30:45.877736 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877744 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877751 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877759 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 20 21:30:45.877768 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877775 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877783 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877790 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:30:45.877797 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Mar 20 21:30:45.877805 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Mar 20 21:30:45.877815 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 20 21:30:45.877825 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Mar 20 21:30:45.877832 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Mar 20 21:30:45.877847 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Mar 20 21:30:45.877854 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Mar 20 21:30:45.877861 kernel: No NUMA configuration found Mar 20 21:30:45.877869 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 20 21:30:45.877876 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 20 21:30:45.877886 kernel: Zone ranges: Mar 20 21:30:45.877893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 20 21:30:45.877901 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 20 21:30:45.877908 kernel: Normal empty Mar 20 21:30:45.877915 kernel: Movable zone start for each node Mar 20 21:30:45.877923 kernel: Early memory node ranges Mar 20 21:30:45.877930 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 20 21:30:45.877937 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 20 21:30:45.877945 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 20 21:30:45.877952 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 20 21:30:45.877962 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 20 21:30:45.877969 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 20 21:30:45.877977 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 20 21:30:45.877984 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 20 21:30:45.877992 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 20 21:30:45.877999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 20 21:30:45.878006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 20 21:30:45.878014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 20 21:30:45.878021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 20 21:30:45.878031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 20 21:30:45.878038 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 20 21:30:45.878046 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 20 21:30:45.878053 kernel: TSC deadline timer available Mar 20 21:30:45.878060 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 20 21:30:45.878068 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 20 21:30:45.878075 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 20 21:30:45.878083 kernel: kvm-guest: setup PV sched yield Mar 20 21:30:45.878090 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 20 21:30:45.878100 kernel: Booting paravirtualized kernel on KVM Mar 20 21:30:45.878107 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 20 21:30:45.878115 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 20 21:30:45.878122 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 20 21:30:45.878130 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 20 21:30:45.878138 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 20 21:30:45.878148 kernel: kvm-guest: PV spinlocks enabled Mar 20 21:30:45.878158 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 20 21:30:45.878168 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:30:45.878179 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 20 21:30:45.878187 kernel: random: crng init done Mar 20 21:30:45.878194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 20 21:30:45.878202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 20 21:30:45.878209 kernel: Fallback order for Node 0: 0 Mar 20 21:30:45.878217 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 20 21:30:45.878224 kernel: Policy zone: DMA32 Mar 20 21:30:45.878232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 20 21:30:45.878239 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 140996K reserved, 0K cma-reserved) Mar 20 21:30:45.878249 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 20 21:30:45.878257 kernel: ftrace: allocating 37985 entries in 149 pages Mar 20 21:30:45.878264 kernel: ftrace: allocated 149 pages with 4 groups Mar 20 21:30:45.878271 kernel: Dynamic Preempt: voluntary Mar 20 21:30:45.878279 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 20 21:30:45.878287 kernel: rcu: RCU event tracing is enabled. Mar 20 21:30:45.878295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 20 21:30:45.878303 kernel: Trampoline variant of Tasks RCU enabled. Mar 20 21:30:45.878312 kernel: Rude variant of Tasks RCU enabled. Mar 20 21:30:45.878320 kernel: Tracing variant of Tasks RCU enabled. Mar 20 21:30:45.878327 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 20 21:30:45.878335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 20 21:30:45.878342 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 20 21:30:45.878350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 20 21:30:45.878357 kernel: Console: colour VGA+ 80x25 Mar 20 21:30:45.878364 kernel: printk: console [ttyS0] enabled Mar 20 21:30:45.878372 kernel: ACPI: Core revision 20230628 Mar 20 21:30:45.878379 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 20 21:30:45.878389 kernel: APIC: Switch to symmetric I/O mode setup Mar 20 21:30:45.878396 kernel: x2apic enabled Mar 20 21:30:45.878404 kernel: APIC: Switched APIC routing to: physical x2apic Mar 20 21:30:45.878411 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 20 21:30:45.878419 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 20 21:30:45.878426 kernel: kvm-guest: setup PV IPIs Mar 20 21:30:45.878441 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 20 21:30:45.878452 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 20 21:30:45.878459 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Mar 20 21:30:45.878467 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 20 21:30:45.878475 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 20 21:30:45.878485 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 20 21:30:45.878493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 20 21:30:45.878500 kernel: Spectre V2 : Mitigation: Retpolines Mar 20 21:30:45.878508 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 20 21:30:45.878516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 20 21:30:45.878526 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 20 21:30:45.878533 kernel: RETBleed: Mitigation: untrained return thunk Mar 20 21:30:45.878541 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 20 21:30:45.878549 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 20 21:30:45.878557 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 20 21:30:45.878565 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 20 21:30:45.878573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 20 21:30:45.878580 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 20 21:30:45.878590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 20 21:30:45.878598 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 20 21:30:45.878606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 20 21:30:45.878614 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 20 21:30:45.878666 kernel: Freeing SMP alternatives memory: 32K Mar 20 21:30:45.878674 kernel: pid_max: default: 32768 minimum: 301 Mar 20 21:30:45.878682 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 20 21:30:45.878690 kernel: landlock: Up and running. Mar 20 21:30:45.878697 kernel: SELinux: Initializing. Mar 20 21:30:45.878708 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:30:45.878719 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:30:45.878727 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 20 21:30:45.878735 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:30:45.878742 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:30:45.878750 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:30:45.878758 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 20 21:30:45.878766 kernel: ... version: 0 Mar 20 21:30:45.878773 kernel: ... bit width: 48 Mar 20 21:30:45.878783 kernel: ... generic registers: 6 Mar 20 21:30:45.878791 kernel: ... value mask: 0000ffffffffffff Mar 20 21:30:45.878798 kernel: ... max period: 00007fffffffffff Mar 20 21:30:45.878806 kernel: ... fixed-purpose events: 0 Mar 20 21:30:45.878814 kernel: ... 
event mask: 000000000000003f Mar 20 21:30:45.878821 kernel: signal: max sigframe size: 1776 Mar 20 21:30:45.878829 kernel: rcu: Hierarchical SRCU implementation. Mar 20 21:30:45.878843 kernel: rcu: Max phase no-delay instances is 400. Mar 20 21:30:45.878851 kernel: smp: Bringing up secondary CPUs ... Mar 20 21:30:45.878861 kernel: smpboot: x86: Booting SMP configuration: Mar 20 21:30:45.878869 kernel: .... node #0, CPUs: #1 #2 #3 Mar 20 21:30:45.878876 kernel: smp: Brought up 1 node, 4 CPUs Mar 20 21:30:45.878884 kernel: smpboot: Max logical packages: 1 Mar 20 21:30:45.878892 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Mar 20 21:30:45.878899 kernel: devtmpfs: initialized Mar 20 21:30:45.878907 kernel: x86/mm: Memory block size: 128MB Mar 20 21:30:45.878915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 20 21:30:45.878923 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 20 21:30:45.878931 kernel: pinctrl core: initialized pinctrl subsystem Mar 20 21:30:45.878941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 20 21:30:45.878948 kernel: audit: initializing netlink subsys (disabled) Mar 20 21:30:45.878956 kernel: audit: type=2000 audit(1742506246.265:1): state=initialized audit_enabled=0 res=1 Mar 20 21:30:45.878964 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 20 21:30:45.878972 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 20 21:30:45.878979 kernel: cpuidle: using governor menu Mar 20 21:30:45.878987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 20 21:30:45.878994 kernel: dca service started, version 1.12.1 Mar 20 21:30:45.879002 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 20 21:30:45.879012 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 20 21:30:45.879020 kernel: PCI: Using configuration type 1 for base access Mar 20 21:30:45.879028 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 20 21:30:45.879036 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 20 21:30:45.879043 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 20 21:30:45.879051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 20 21:30:45.879059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 20 21:30:45.879066 kernel: ACPI: Added _OSI(Module Device) Mar 20 21:30:45.879076 kernel: ACPI: Added _OSI(Processor Device) Mar 20 21:30:45.879084 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 20 21:30:45.879091 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 20 21:30:45.879099 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 20 21:30:45.879107 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 20 21:30:45.879114 kernel: ACPI: Interpreter enabled Mar 20 21:30:45.879122 kernel: ACPI: PM: (supports S0 S3 S5) Mar 20 21:30:45.879130 kernel: ACPI: Using IOAPIC for interrupt routing Mar 20 21:30:45.879139 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 20 21:30:45.879150 kernel: PCI: Using E820 reservations for host bridge windows Mar 20 21:30:45.879163 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 20 21:30:45.879170 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 20 21:30:45.879352 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 20 21:30:45.879482 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 20 21:30:45.879604 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 20 21:30:45.879614 kernel: PCI host bridge to bus 0000:00 Mar 20 21:30:45.879758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 20 21:30:45.879885 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 20 21:30:45.879998 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 20 21:30:45.880110 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 20 21:30:45.880231 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 20 21:30:45.880344 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 20 21:30:45.880455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 20 21:30:45.880598 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 20 21:30:45.880751 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 20 21:30:45.880883 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 20 21:30:45.881009 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 20 21:30:45.881130 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 20 21:30:45.881262 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 20 21:30:45.881404 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 20 21:30:45.881533 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 20 21:30:45.881679 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 20 21:30:45.881804 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 20 21:30:45.881946 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 20 21:30:45.882072 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 20 21:30:45.882206 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 20 
21:30:45.882332 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 20 21:30:45.882472 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 20 21:30:45.882602 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 20 21:30:45.882744 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 20 21:30:45.882876 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 20 21:30:45.883000 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 20 21:30:45.883131 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 20 21:30:45.883279 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 20 21:30:45.883477 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 20 21:30:45.883602 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 20 21:30:45.883763 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 20 21:30:45.883943 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 20 21:30:45.884093 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 20 21:30:45.884105 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 20 21:30:45.884119 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 20 21:30:45.884127 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 20 21:30:45.884135 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 20 21:30:45.884145 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 20 21:30:45.884156 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 20 21:30:45.884167 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 20 21:30:45.884177 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 20 21:30:45.884188 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 20 21:30:45.884198 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 20 21:30:45.884212 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 20 21:30:45.884222 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 20 21:30:45.884233 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 20 21:30:45.884243 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 20 21:30:45.884251 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 20 21:30:45.884259 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 20 21:30:45.884267 kernel: iommu: Default domain type: Translated Mar 20 21:30:45.884275 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 20 21:30:45.884282 kernel: PCI: Using ACPI for IRQ routing Mar 20 21:30:45.884292 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 20 21:30:45.884301 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 20 21:30:45.884309 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 20 21:30:45.884437 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 20 21:30:45.884616 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 20 21:30:45.884787 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 20 21:30:45.884798 kernel: vgaarb: loaded Mar 20 21:30:45.884807 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 20 21:30:45.884819 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 20 21:30:45.884827 kernel: clocksource: Switched to clocksource kvm-clock Mar 20 21:30:45.884845 kernel: VFS: Disk quotas dquot_6.6.0 Mar 20 
21:30:45.884854 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 20 21:30:45.884861 kernel: pnp: PnP ACPI init Mar 20 21:30:45.885030 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 20 21:30:45.885044 kernel: pnp: PnP ACPI: found 6 devices Mar 20 21:30:45.885052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 20 21:30:45.885060 kernel: NET: Registered PF_INET protocol family Mar 20 21:30:45.885071 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 20 21:30:45.885079 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 20 21:30:45.885087 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 20 21:30:45.885095 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 20 21:30:45.885103 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 20 21:30:45.885111 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 20 21:30:45.885119 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:30:45.885126 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:30:45.885137 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 20 21:30:45.885148 kernel: NET: Registered PF_XDP protocol family Mar 20 21:30:45.885273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 20 21:30:45.885388 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 20 21:30:45.885500 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 20 21:30:45.885615 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 20 21:30:45.885747 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 20 21:30:45.885867 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 20 21:30:45.885878 kernel: PCI: CLS 0 bytes, default 64 Mar 20 21:30:45.885891 kernel: Initialise system trusted keyrings Mar 20 21:30:45.885901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 20 21:30:45.885910 kernel: Key type asymmetric registered Mar 20 21:30:45.885919 kernel: Asymmetric key parser 'x509' registered Mar 20 21:30:45.885927 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 20 21:30:45.885935 kernel: io scheduler mq-deadline registered Mar 20 21:30:45.885943 kernel: io scheduler kyber registered Mar 20 21:30:45.885951 kernel: io scheduler bfq registered Mar 20 21:30:45.885958 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 20 21:30:45.885970 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 20 21:30:45.885978 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 20 21:30:45.885986 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 20 21:30:45.885994 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 20 21:30:45.886002 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 20 21:30:45.886010 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 20 21:30:45.886018 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 20 21:30:45.886025 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 20 21:30:45.886200 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 20 21:30:45.886331 kernel: rtc_cmos 00:04: registered as rtc0 Mar 20 21:30:45.886343 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Mar 20 21:30:45.886456 kernel: rtc_cmos 00:04: setting system clock to 2025-03-20T21:30:45 UTC (1742506245) Mar 20 21:30:45.886571 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 20 21:30:45.886582 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 20 21:30:45.886590 kernel: NET: Registered PF_INET6 protocol family Mar 20 21:30:45.886598 kernel: Segment Routing with IPv6 Mar 20 21:30:45.886605 kernel: In-situ OAM (IOAM) with IPv6 Mar 20 21:30:45.886682 kernel: NET: Registered PF_PACKET protocol family Mar 20 21:30:45.886691 kernel: Key type dns_resolver registered Mar 20 21:30:45.886698 kernel: IPI shorthand broadcast: enabled Mar 20 21:30:45.886706 kernel: sched_clock: Marking stable (538003045, 103388975)->(685244543, -43852523) Mar 20 21:30:45.886714 kernel: registered taskstats version 1 Mar 20 21:30:45.886722 kernel: Loading compiled-in X.509 certificates Mar 20 21:30:45.886730 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 9e7923b67df1c6f0613bc4380f7ea8de9ce851ac' Mar 20 21:30:45.886738 kernel: Key type .fscrypt registered Mar 20 21:30:45.886746 kernel: Key type fscrypt-provisioning registered Mar 20 21:30:45.886757 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 20 21:30:45.886765 kernel: ima: Allocated hash algorithm: sha1 Mar 20 21:30:45.886773 kernel: ima: No architecture policies found Mar 20 21:30:45.886780 kernel: clk: Disabling unused clocks Mar 20 21:30:45.886788 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 20 21:30:45.886796 kernel: Write protecting the kernel read-only data: 40960k Mar 20 21:30:45.886804 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 20 21:30:45.886812 kernel: Run /init as init process Mar 20 21:30:45.886820 kernel: with arguments: Mar 20 21:30:45.886830 kernel: /init Mar 20 21:30:45.886845 kernel: with environment: Mar 20 21:30:45.886853 kernel: HOME=/ Mar 20 21:30:45.886861 kernel: TERM=linux Mar 20 21:30:45.886868 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 21:30:45.886877 systemd[1]: Successfully made /usr/ read-only. Mar 20 21:30:45.886888 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:30:45.886900 systemd[1]: Detected virtualization kvm. Mar 20 21:30:45.886908 systemd[1]: Detected architecture x86-64. Mar 20 21:30:45.886916 systemd[1]: Running in initrd. Mar 20 21:30:45.886924 systemd[1]: No hostname configured, using default hostname. Mar 20 21:30:45.886933 systemd[1]: Hostname set to . Mar 20 21:30:45.886941 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:30:45.886949 systemd[1]: Queued start job for default target initrd.target. Mar 20 21:30:45.886957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:30:45.886969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:30:45.886989 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 21:30:45.887001 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 20 21:30:45.887010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 21:30:45.887019 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 21:30:45.887031 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 21:30:45.887040 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 21:30:45.887048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:30:45.887057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:30:45.887065 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:30:45.887082 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:30:45.887098 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:30:45.887113 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:30:45.887122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:30:45.887134 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:30:45.887145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 21:30:45.887157 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 21:30:45.887171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:30:45.887180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:30:45.887188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:30:45.887197 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:30:45.887205 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 21:30:45.887217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:30:45.887226 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 21:30:45.887235 systemd[1]: Starting systemd-fsck-usr.service... Mar 20 21:30:45.887243 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:30:45.887252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:30:45.887260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:30:45.887269 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 20 21:30:45.887277 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:30:45.887289 systemd[1]: Finished systemd-fsck-usr.service. Mar 20 21:30:45.887322 systemd-journald[193]: Collecting audit messages is disabled. Mar 20 21:30:45.887348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:30:45.887357 systemd-journald[193]: Journal started Mar 20 21:30:45.887378 systemd-journald[193]: Runtime Journal (/run/log/journal/f866187b59314eed8cf5206da4ed10c9) is 6M, max 48.3M, 42.3M free. Mar 20 21:30:45.884990 systemd-modules-load[194]: Inserted module 'overlay' Mar 20 21:30:45.913298 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:30:45.913320 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 20 21:30:45.913333 kernel: Bridge firewalling registered Mar 20 21:30:45.913153 systemd-modules-load[194]: Inserted module 'br_netfilter' Mar 20 21:30:45.920120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:30:45.922510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:30:45.925452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:30:45.931241 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:30:45.934390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:30:45.944293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:30:45.946810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:30:45.955817 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:30:45.956116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:30:45.962774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:30:45.966028 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:30:45.969483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:30:45.980323 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 20 21:30:45.995538 dracut-cmdline[233]: dracut-dracut-053 Mar 20 21:30:45.998886 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:30:46.013464 systemd-resolved[227]: Positive Trust Anchors: Mar 20 21:30:46.013482 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:30:46.013514 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:30:46.016118 systemd-resolved[227]: Defaulting to hostname 'linux'. Mar 20 21:30:46.017273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:30:46.023680 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:30:46.096663 kernel: SCSI subsystem initialized Mar 20 21:30:46.105655 kernel: Loading iSCSI transport class v2.0-870. Mar 20 21:30:46.116648 kernel: iscsi: registered transport (tcp) Mar 20 21:30:46.137660 kernel: iscsi: registered transport (qla4xxx) Mar 20 21:30:46.137731 kernel: QLogic iSCSI HBA Driver Mar 20 21:30:46.181249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 20 21:30:46.183413 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 20 21:30:46.226378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 20 21:30:46.226469 kernel: device-mapper: uevent: version 1.0.3 Mar 20 21:30:46.226487 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 20 21:30:46.266642 kernel: raid6: avx2x4 gen() 30208 MB/s Mar 20 21:30:46.283655 kernel: raid6: avx2x2 gen() 30766 MB/s Mar 20 21:30:46.300733 kernel: raid6: avx2x1 gen() 25693 MB/s Mar 20 21:30:46.300758 kernel: raid6: using algorithm avx2x2 gen() 30766 MB/s Mar 20 21:30:46.318746 kernel: raid6: .... xor() 19702 MB/s, rmw enabled Mar 20 21:30:46.318767 kernel: raid6: using avx2x2 recovery algorithm Mar 20 21:30:46.339650 kernel: xor: automatically using best checksumming function avx Mar 20 21:30:46.492659 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 20 21:30:46.505928 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:30:46.509770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:30:46.536571 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 20 21:30:46.541846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:30:46.546328 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 20 21:30:46.578316 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 20 21:30:46.611402 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:30:46.613169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:30:46.693134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:30:46.698010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 21:30:46.719587 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 21:30:46.721681 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:30:46.722485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:30:46.722801 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:30:46.723958 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 21:30:46.737543 kernel: cryptd: max_cpu_qlen set to 1000 Mar 20 21:30:46.753776 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:30:46.760890 kernel: libata version 3.00 loaded. Mar 20 21:30:46.760915 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 20 21:30:46.779036 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 21:30:46.779224 kernel: AVX2 version of gcm_enc/dec engaged. Mar 20 21:30:46.779239 kernel: AES CTR mode by8 optimization enabled Mar 20 21:30:46.779263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 21:30:46.779277 kernel: GPT:9289727 != 19775487 Mar 20 21:30:46.779290 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 21:30:46.779303 kernel: GPT:9289727 != 19775487 Mar 20 21:30:46.779316 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 20 21:30:46.779328 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:30:46.779342 kernel: ahci 0000:00:1f.2: version 3.0 Mar 20 21:30:46.816590 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 20 21:30:46.816607 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 20 21:30:46.816787 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 20 21:30:46.816943 kernel: scsi host0: ahci Mar 20 21:30:46.817102 kernel: scsi host1: ahci Mar 20 21:30:46.817361 kernel: scsi host2: ahci Mar 20 21:30:46.817513 kernel: scsi host3: ahci Mar 20 21:30:46.819788 kernel: scsi host4: ahci Mar 20 21:30:46.819954 kernel: scsi host5: ahci Mar 20 21:30:46.820109 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 20 21:30:46.820122 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 20 21:30:46.820132 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 20 21:30:46.820143 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 20 21:30:46.820153 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 20 21:30:46.820164 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 20 21:30:46.820174 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (474) Mar 20 21:30:46.820189 kernel: BTRFS: device fsid 48a514e8-9ecc-46c2-935b-caca347f921e devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (476) Mar 20 21:30:46.777871 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:30:46.777995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:30:46.779785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:30:46.792252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:30:46.792433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:30:46.795643 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:30:46.798567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:30:46.836326 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 20 21:30:46.875481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:30:46.889549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 21:30:46.913410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:30:46.926157 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 21:30:46.927447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 21:30:46.929646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 21:30:46.931781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:30:46.977714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:30:47.117167 disk-uuid[570]: Primary Header is updated. Mar 20 21:30:47.117167 disk-uuid[570]: Secondary Entries is updated. 
Mar 20 21:30:47.117167 disk-uuid[570]: Secondary Header is updated. Mar 20 21:30:47.122680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:30:47.127892 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:30:47.127933 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 20 21:30:47.131315 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 20 21:30:47.131379 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 20 21:30:47.131396 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 20 21:30:47.131411 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 20 21:30:47.133650 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 20 21:30:47.133721 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 20 21:30:47.135857 kernel: ata3.00: applying bridge limits Mar 20 21:30:47.138213 kernel: ata3.00: configured for UDMA/100 Mar 20 21:30:47.138251 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 20 21:30:47.190664 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 20 21:30:47.207287 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 20 21:30:47.207309 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 20 21:30:48.126650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:30:48.127268 disk-uuid[579]: The operation has completed successfully. Mar 20 21:30:48.165714 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:30:48.165839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:30:48.197363 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:30:48.214925 sh[594]: Success Mar 20 21:30:48.226654 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 20 21:30:48.259453 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:30:48.263248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:30:48.276845 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 21:30:48.283715 kernel: BTRFS info (device dm-0): first mount of filesystem 48a514e8-9ecc-46c2-935b-caca347f921e Mar 20 21:30:48.283748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:30:48.283759 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:30:48.284717 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:30:48.286048 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:30:48.289854 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:30:48.292002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 21:30:48.294491 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:30:48.296980 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 20 21:30:48.320918 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:30:48.320977 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:30:48.320987 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:30:48.324653 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:30:48.328784 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:30:48.334249 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:30:48.335346 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 21:30:48.398866 ignition[685]: Ignition 2.20.0 Mar 20 21:30:48.398877 ignition[685]: Stage: fetch-offline Mar 20 21:30:48.398916 ignition[685]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:48.398925 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:48.399015 ignition[685]: parsed url from cmdline: "" Mar 20 21:30:48.399020 ignition[685]: no config URL provided Mar 20 21:30:48.399025 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:30:48.399033 ignition[685]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:30:48.399066 ignition[685]: op(1): [started] loading QEMU firmware config module Mar 20 21:30:48.399071 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:30:48.407970 ignition[685]: op(1): [finished] loading QEMU firmware config module Mar 20 21:30:48.407996 ignition[685]: QEMU firmware config was not found. Ignoring... Mar 20 21:30:48.410321 ignition[685]: parsing config with SHA512: 49a69a9474a17754491e4f7476d43840477108b07b5e41ae3d608a0b922df50fffe09e782187b309416642f000d8db432570fe6b1070bfa351782a66d5c244dc Mar 20 21:30:48.413500 unknown[685]: fetched base config from "system" Mar 20 21:30:48.413511 unknown[685]: fetched user config from "qemu" Mar 20 21:30:48.413780 ignition[685]: fetch-offline: fetch-offline passed Mar 20 21:30:48.413845 ignition[685]: Ignition finished successfully Mar 20 21:30:48.416212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:30:48.426375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:30:48.429296 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:30:48.475197 systemd-networkd[782]: lo: Link UP Mar 20 21:30:48.475210 systemd-networkd[782]: lo: Gained carrier Mar 20 21:30:48.476898 systemd-networkd[782]: Enumeration completed Mar 20 21:30:48.477006 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:30:48.477250 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:30:48.477255 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:30:48.478045 systemd-networkd[782]: eth0: Link UP Mar 20 21:30:48.478048 systemd-networkd[782]: eth0: Gained carrier Mar 20 21:30:48.478055 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:30:48.479094 systemd[1]: Reached target network.target - Network. Mar 20 21:30:48.481144 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Mar 20 21:30:48.481885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:30:48.496679 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:30:48.505789 ignition[785]: Ignition 2.20.0 Mar 20 21:30:48.505800 ignition[785]: Stage: kargs Mar 20 21:30:48.505958 ignition[785]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:48.505971 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:48.509482 ignition[785]: kargs: kargs passed Mar 20 21:30:48.509536 ignition[785]: Ignition finished successfully Mar 20 21:30:48.513198 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:30:48.516216 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 21:30:48.540411 ignition[794]: Ignition 2.20.0 Mar 20 21:30:48.540422 ignition[794]: Stage: disks Mar 20 21:30:48.540579 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:48.540590 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:48.544115 ignition[794]: disks: disks passed Mar 20 21:30:48.544166 ignition[794]: Ignition finished successfully Mar 20 21:30:48.546839 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:30:48.548998 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 21:30:48.551166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:30:48.553523 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:30:48.555493 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:30:48.557506 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:30:48.560254 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 21:30:48.585675 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.139 Mar 20 21:30:48.585689 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Mar 20 21:30:48.589416 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:30:48.595414 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:30:48.596485 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:30:48.695640 kernel: EXT4-fs (vda9): mounted filesystem 79cdbe74-6884-4c57-b04d-c9a431509f16 r/w with ordered data mode. Quota mode: none. Mar 20 21:30:48.695817 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:30:48.697916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:30:48.701160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:30:48.703577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:30:48.705503 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 21:30:48.705548 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:30:48.705570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:30:48.723568 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:30:48.727053 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 20 21:30:48.729563 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (813) Mar 20 21:30:48.731669 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:30:48.731694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:30:48.731708 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:30:48.734645 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:30:48.739892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 21:30:48.771736 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:30:48.775739 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:30:48.780656 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:30:48.784425 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:30:48.869559 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:30:48.871651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:30:48.873273 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:30:48.892664 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:30:48.907804 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 21:30:48.919307 ignition[927]: INFO : Ignition 2.20.0 Mar 20 21:30:48.919307 ignition[927]: INFO : Stage: mount Mar 20 21:30:48.921086 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:48.921086 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:48.921086 ignition[927]: INFO : mount: mount passed Mar 20 21:30:48.921086 ignition[927]: INFO : Ignition finished successfully Mar 20 21:30:48.922488 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 21:30:48.925291 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 21:30:49.283227 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 21:30:49.284904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:30:49.306645 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (940) Mar 20 21:30:49.309239 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:30:49.309254 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:30:49.309264 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:30:49.311653 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:30:49.313252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:30:49.339773 ignition[957]: INFO : Ignition 2.20.0 Mar 20 21:30:49.339773 ignition[957]: INFO : Stage: files Mar 20 21:30:49.341540 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:49.341540 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:49.341540 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 20 21:30:49.341540 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 21:30:49.341540 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 21:30:49.347980 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 21:30:49.347980 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 21:30:49.347980 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 20 21:30:49.347980 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 20 21:30:49.344279 unknown[957]: wrote ssh authorized keys file for user: core Mar 20 21:30:49.649842 systemd-networkd[782]: eth0: Gained IPv6LL Mar 20 21:30:49.703610 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 20 21:30:50.215981 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 20 21:30:50.215981 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 20 21:30:50.220322 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:30:50.220322 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:30:50.220322 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 20 21:30:50.220322 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Mar 20 21:30:50.233393 ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:30:50.237725 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:30:50.239300 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 21:30:50.239300 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:30:50.239300 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:30:50.239300 ignition[957]: INFO : files: files passed Mar 20 21:30:50.239300 ignition[957]: INFO : Ignition finished successfully Mar 20 21:30:50.240733 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 21:30:50.242876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 21:30:50.245299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 21:30:50.262127 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 21:30:50.262264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 21:30:50.272469 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 21:30:50.270445 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:30:50.278273 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:30:50.278273 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:30:50.273260 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 21:30:50.282527 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:30:50.275930 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 21:30:50.330835 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 21:30:50.330978 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 21:30:50.334019 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 21:30:50.336098 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 21:30:50.336218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 21:30:50.337157 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 21:30:50.367615 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:30:50.370365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 21:30:50.393239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:30:50.394750 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:30:50.397428 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 21:30:50.399829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 21:30:50.399940 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Mar 20 21:30:50.402456 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:30:50.404177 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:30:50.406239 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:30:50.408573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:30:50.410908 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:30:50.413379 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 21:30:50.415815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:30:50.418458 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:30:50.420548 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:30:50.422747 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:30:50.424814 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:30:50.424984 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:30:50.427086 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:30:50.428513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:30:50.430702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:30:50.430820 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:30:50.432915 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:30:50.433042 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:30:50.435465 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:30:50.435602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:30:50.437434 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:30:50.439132 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:30:50.442668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:30:50.444766 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:30:50.446736 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:30:50.448521 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:30:50.448635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:30:50.450526 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:30:50.450609 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:30:50.452983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:30:50.453104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:30:50.455041 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:30:50.455149 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:30:50.457803 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:30:50.459454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 21:30:50.459565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:30:50.462320 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 21:30:50.463314 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Mar 20 21:30:50.463427 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:30:50.465740 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:30:50.465861 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:30:50.482937 ignition[1012]: INFO : Ignition 2.20.0 Mar 20 21:30:50.482937 ignition[1012]: INFO : Stage: umount Mar 20 21:30:50.482937 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:30:50.482937 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:30:50.482937 ignition[1012]: INFO : umount: umount passed Mar 20 21:30:50.482937 ignition[1012]: INFO : Ignition finished successfully Mar 20 21:30:50.472357 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:30:50.472468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 21:30:50.484139 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:30:50.484294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:30:50.485968 systemd[1]: Stopped target network.target - Network. Mar 20 21:30:50.487570 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:30:50.487643 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:30:50.490391 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:30:50.490438 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:30:50.492410 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:30:50.492457 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:30:50.495056 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 21:30:50.495102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:30:50.497286 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:30:50.499971 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:30:50.503307 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 21:30:50.503904 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:30:50.504026 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:30:50.507933 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:30:50.508515 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:30:50.508595 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:30:50.511169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:30:50.511437 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 21:30:50.511549 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:30:50.514203 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:30:50.514749 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:30:50.514808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:30:50.516986 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:30:50.518103 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:30:50.518154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 20 21:30:50.520484 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:30:50.520530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:30:50.522634 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 21:30:50.522684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 21:30:50.524537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:30:50.530441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:30:50.541029 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 21:30:50.541215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:30:50.543564 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 21:30:50.543645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 21:30:50.545316 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 21:30:50.545366 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:30:50.547305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 21:30:50.547358 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:30:50.549475 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 21:30:50.549534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 21:30:50.551441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:30:50.551499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:30:50.554404 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 21:30:50.579112 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 21:30:50.579168 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:30:50.581852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:30:50.581904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:30:50.588876 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 21:30:50.588984 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 21:30:50.595476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 21:30:50.595584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 21:30:50.644918 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 21:30:50.645086 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 20 21:30:50.646280 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 21:30:50.647917 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 21:30:50.647974 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 21:30:50.650764 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 21:30:50.670275 systemd[1]: Switching root. Mar 20 21:30:50.709045 systemd-journald[193]: Journal stopped Mar 20 21:30:51.782863 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Mar 20 21:30:51.782924 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 21:30:51.782945 kernel: SELinux: policy capability open_perms=1 Mar 20 21:30:51.782962 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 21:30:51.782979 kernel: SELinux: policy capability always_check_network=0 Mar 20 21:30:51.782990 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 21:30:51.783002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 21:30:51.783017 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 21:30:51.783029 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 21:30:51.783040 kernel: audit: type=1403 audit(1742506250.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 21:30:51.783058 systemd[1]: Successfully loaded SELinux policy in 41.717ms. Mar 20 21:30:51.783079 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.672ms. Mar 20 21:30:51.783093 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:30:51.783106 systemd[1]: Detected virtualization kvm. Mar 20 21:30:51.783121 systemd[1]: Detected architecture x86-64. Mar 20 21:30:51.783133 systemd[1]: Detected first boot. Mar 20 21:30:51.783145 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:30:51.783158 zram_generator::config[1059]: No configuration found. Mar 20 21:30:51.783171 kernel: Guest personality initialized and is inactive Mar 20 21:30:51.783188 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 20 21:30:51.783200 kernel: Initialized host personality Mar 20 21:30:51.783214 kernel: NET: Registered PF_VSOCK protocol family Mar 20 21:30:51.783226 systemd[1]: Populated /etc with preset unit settings. Mar 20 21:30:51.783239 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 21:30:51.783251 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 21:30:51.783264 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 21:30:51.783290 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 21:30:51.783303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 21:30:51.783315 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 21:30:51.783328 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 21:30:51.783343 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 21:30:51.783355 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 21:30:51.783369 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 20 21:30:51.783381 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 21:30:51.783394 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 21:30:51.783406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:30:51.783419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 20 21:30:51.783432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 21:30:51.783445 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 21:30:51.783460 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 21:30:51.783473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:30:51.783486 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 20 21:30:51.783498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:30:51.783510 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 21:30:51.783523 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 21:30:51.783535 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 21:30:51.783550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 21:30:51.783563 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:30:51.783575 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:30:51.783588 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:30:51.783600 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:30:51.783612 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 21:30:51.783697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 21:30:51.783711 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 21:30:51.783724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:30:51.783739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:30:51.783751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:30:51.783768 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 21:30:51.783781 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 21:30:51.783793 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 21:30:51.783843 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 21:30:51.783855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:30:51.783868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 21:30:51.783880 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 21:30:51.783896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 21:30:51.783909 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 21:30:51.783922 systemd[1]: Reached target machines.target - Containers. Mar 20 21:30:51.783934 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 21:30:51.783946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:30:51.783959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 20 21:30:51.783971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 21:30:51.783983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:30:51.783996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:30:51.784011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:30:51.784026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 21:30:51.784040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:30:51.784053 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 21:30:51.784066 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 21:30:51.784078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 21:30:51.784090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 21:30:51.784102 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 21:30:51.784119 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:30:51.784131 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:30:51.784143 kernel: fuse: init (API version 7.39) Mar 20 21:30:51.784155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:30:51.784167 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 21:30:51.784180 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 21:30:51.784192 kernel: loop: module loaded Mar 20 21:30:51.784204 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 21:30:51.784216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:30:51.784230 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 21:30:51.784242 systemd[1]: Stopped verity-setup.service. Mar 20 21:30:51.784255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:30:51.784285 systemd-journald[1131]: Collecting audit messages is disabled. Mar 20 21:30:51.784310 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 21:30:51.784323 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 21:30:51.784335 systemd-journald[1131]: Journal started Mar 20 21:30:51.784358 systemd-journald[1131]: Runtime Journal (/run/log/journal/f866187b59314eed8cf5206da4ed10c9) is 6M, max 48.3M, 42.3M free. Mar 20 21:30:51.545786 systemd[1]: Queued start job for default target multi-user.target. Mar 20 21:30:51.560482 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 21:30:51.561010 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 21:30:51.786664 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:30:51.788817 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 21:30:51.789953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 20 21:30:51.791156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 21:30:51.792419 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 21:30:51.793733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:30:51.795236 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 21:30:51.795450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 20 21:30:51.797129 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 21:30:51.798591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:30:51.798973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:30:51.800486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:30:51.800869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:30:51.802366 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 21:30:51.802572 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 21:30:51.803944 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:30:51.804150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:30:51.805537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:30:51.806959 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 21:30:51.808489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 21:30:51.810026 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 21:30:51.821913 kernel: ACPI: bus type drm_connector registered Mar 20 21:30:51.822874 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:30:51.823118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:30:51.827399 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 21:30:51.830282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:30:51.832447 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:30:51.833565 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:30:51.833596 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:30:51.835710 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:30:51.849737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:30:51.851925 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:30:51.853288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:30:51.854571 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:30:51.859746 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 21:30:51.861742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:30:51.865298 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 20 21:30:51.866663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:30:51.868814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:30:51.881655 systemd-journald[1131]: Time spent on flushing to /var/log/journal/f866187b59314eed8cf5206da4ed10c9 is 13.641ms for 947 entries. Mar 20 21:30:51.881655 systemd-journald[1131]: System Journal (/var/log/journal/f866187b59314eed8cf5206da4ed10c9) is 8M, max 195.6M, 187.6M free. Mar 20 21:30:51.904886 systemd-journald[1131]: Received client request to flush runtime journal. Mar 20 21:30:51.875213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:30:51.878507 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:30:51.882809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:30:51.897130 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:30:51.899094 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:30:51.901459 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:30:51.904541 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:30:51.908651 kernel: loop0: detected capacity change from 0 to 205544 Mar 20 21:30:51.908802 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:30:51.915340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:30:51.919388 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:30:51.923469 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:30:51.926803 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:30:51.929690 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:30:51.939939 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 20 21:30:51.946057 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 21:30:51.949680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:30:51.957646 kernel: loop1: detected capacity change from 0 to 109808 Mar 20 21:30:51.963134 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:30:51.982983 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Mar 20 21:30:51.983001 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Mar 20 21:30:51.988670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:30:51.991656 kernel: loop2: detected capacity change from 0 to 151640 Mar 20 21:30:52.042647 kernel: loop3: detected capacity change from 0 to 205544 Mar 20 21:30:52.051649 kernel: loop4: detected capacity change from 0 to 109808 Mar 20 21:30:52.061650 kernel: loop5: detected capacity change from 0 to 151640 Mar 20 21:30:52.072113 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:30:52.072754 (sd-merge)[1204]: Merged extensions into '/usr'. Mar 20 21:30:52.077786 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... 
Mar 20 21:30:52.077801 systemd[1]: Reloading... Mar 20 21:30:52.145729 zram_generator::config[1232]: No configuration found. Mar 20 21:30:52.197247 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:30:52.267263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:30:52.333258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:30:52.333530 systemd[1]: Reloading finished in 255 ms. Mar 20 21:30:52.351460 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:30:52.353076 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 21:30:52.371280 systemd[1]: Starting ensure-sysext.service... Mar 20 21:30:52.373347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:30:52.383147 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:30:52.383163 systemd[1]: Reloading... Mar 20 21:30:52.412365 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:30:52.413057 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:30:52.414213 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:30:52.415045 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Mar 20 21:30:52.415181 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Mar 20 21:30:52.421406 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:30:52.421512 systemd-tmpfiles[1270]: Skipping /boot Mar 20 21:30:52.439015 zram_generator::config[1305]: No configuration found. Mar 20 21:30:52.437937 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:30:52.437946 systemd-tmpfiles[1270]: Skipping /boot Mar 20 21:30:52.547277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:30:52.614889 systemd[1]: Reloading finished in 231 ms. Mar 20 21:30:52.626519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:30:52.644398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:30:52.654196 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:30:52.656866 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 21:30:52.665184 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 21:30:52.668925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:30:52.672445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:30:52.676690 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:30:52.683079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 20 21:30:52.683251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:30:52.684912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:30:52.688172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:30:52.691190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:30:52.692431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:30:52.692666 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:30:52.696853 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:30:52.697990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:30:52.699581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:30:52.700046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:30:52.702160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:30:52.702418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:30:52.704367 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:30:52.711893 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:30:52.712149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:30:52.718725 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Mar 20 21:30:52.728328 augenrules[1371]: No rules Mar 20 21:30:52.730029 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:30:52.730535 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:30:52.733086 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:30:52.736850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:30:52.737159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:30:52.739156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:30:52.742517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:30:52.747849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:30:52.758969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:30:52.760257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:30:52.760417 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:30:52.762088 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 20 21:30:52.763776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 21:30:52.765991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:30:52.768607 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:30:52.773687 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:30:52.776320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:30:52.776553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:30:52.781008 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:30:52.781682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:30:52.783287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:30:52.783513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:30:52.785305 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:30:52.785573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:30:52.787449 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 21:30:52.793915 systemd[1]: Finished ensure-sysext.service. Mar 20 21:30:52.809832 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:30:52.811348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:30:52.811428 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:30:52.813467 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:30:52.814835 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:30:52.834658 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1404) Mar 20 21:30:52.866748 systemd-resolved[1341]: Positive Trust Anchors: Mar 20 21:30:52.866763 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:30:52.866795 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:30:52.871479 systemd-resolved[1341]: Defaulting to hostname 'linux'. Mar 20 21:30:52.873463 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:30:52.874908 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 20 21:30:52.874955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 20 21:30:52.906639 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 20 21:30:52.905014 systemd-networkd[1415]: lo: Link UP Mar 20 21:30:52.905019 systemd-networkd[1415]: lo: Gained carrier Mar 20 21:30:52.905986 systemd-networkd[1415]: Enumeration completed Mar 20 21:30:52.906069 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:30:52.907286 systemd[1]: Reached target network.target - Network. Mar 20 21:30:52.910712 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:30:52.911656 kernel: ACPI: button: Power Button [PWRF] Mar 20 21:30:52.914782 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:30:52.922079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:30:52.924912 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:30:52.924925 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:30:52.925738 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:30:52.926321 systemd-networkd[1415]: eth0: Link UP Mar 20 21:30:52.926331 systemd-networkd[1415]: eth0: Gained carrier Mar 20 21:30:52.926346 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:30:52.930771 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:30:52.935930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 21:30:52.936732 systemd-networkd[1415]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:30:52.937250 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Mar 20 21:30:54.269207 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:30:54.269285 systemd-timesyncd[1416]: Initial clock synchronization to Thu 2025-03-20 21:30:54.269087 UTC. Mar 20 21:30:54.269698 systemd-resolved[1341]: Clock change detected. Flushing caches. Mar 20 21:30:54.278869 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:30:54.284435 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 20 21:30:54.288487 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 20 21:30:54.293731 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 20 21:30:54.298551 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 20 21:30:54.298947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:30:54.329921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 20 21:30:54.361619 kernel: mousedev: PS/2 mouse device common for all mice Mar 20 21:30:54.388066 kernel: kvm_amd: TSC scaling supported Mar 20 21:30:54.388160 kernel: kvm_amd: Nested Virtualization enabled Mar 20 21:30:54.388174 kernel: kvm_amd: Nested Paging enabled Mar 20 21:30:54.389084 kernel: kvm_amd: LBR virtualization supported Mar 20 21:30:54.389123 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 20 21:30:54.389646 kernel: kvm_amd: Virtual GIF supported Mar 20 21:30:54.408635 kernel: EDAC MC: Ver: 3.0.0 Mar 20 21:30:54.445321 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:30:54.462741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:30:54.466385 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:30:54.486147 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:30:54.520976 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:30:54.522566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:30:54.523700 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:30:54.524855 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:30:54.526092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:30:54.527508 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:30:54.528705 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:30:54.529948 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 21:30:54.531174 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:30:54.531200 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:30:54.532092 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:30:54.534175 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:30:54.537485 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:30:54.541200 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:30:54.542706 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:30:54.544006 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:30:54.547951 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:30:54.549496 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:30:54.552023 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:30:54.553789 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:30:54.554985 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:30:54.556005 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:30:54.557069 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:30:54.557107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 20 21:30:54.558230 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:30:54.559852 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:30:54.560402 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:30:54.564841 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:30:54.567643 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:30:54.568926 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:30:54.571471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:30:54.573024 jq[1452]: false Mar 20 21:30:54.574853 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:30:54.579722 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:30:54.584824 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:30:54.586849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:30:54.587298 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:30:54.589512 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:30:54.593533 extend-filesystems[1453]: Found loop3 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found loop4 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found loop5 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found sr0 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda1 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda2 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda3 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found usr Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda4 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda6 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda7 Mar 20 21:30:54.601128 extend-filesystems[1453]: Found vda9 Mar 20 21:30:54.601128 extend-filesystems[1453]: Checking size of /dev/vda9 Mar 20 21:30:54.599370 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:30:54.596007 dbus-daemon[1451]: [system] SELinux support is enabled Mar 20 21:30:54.601841 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:30:54.608038 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:30:54.609406 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:30:54.609673 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:30:54.615777 jq[1467]: true Mar 20 21:30:54.609988 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:30:54.616551 extend-filesystems[1453]: Resized partition /dev/vda9 Mar 20 21:30:54.610216 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 20 21:30:54.623878 update_engine[1461]: I20250320 21:30:54.618345 1461 main.cc:92] Flatcar Update Engine starting Mar 20 21:30:54.624064 extend-filesystems[1471]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:30:54.614050 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:30:54.615014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 21:30:54.624159 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:30:54.631129 jq[1472]: true Mar 20 21:30:54.632075 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:30:54.633399 update_engine[1461]: I20250320 21:30:54.633344 1461 update_check_scheduler.cc:74] Next update check in 6m41s Mar 20 21:30:54.636571 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:30:54.637645 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:30:54.639713 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:30:54.639736 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:30:54.642156 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:30:54.647605 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1398) Mar 20 21:30:54.653788 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 21:30:54.676609 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:30:54.702474 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:30:54.702474 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:30:54.702474 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:30:54.713115 extend-filesystems[1453]: Resized filesystem in /dev/vda9 Mar 20 21:30:54.704557 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:30:54.704872 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:30:54.705992 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button) Mar 20 21:30:54.706019 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 20 21:30:54.706433 systemd-logind[1459]: New seat seat0. Mar 20 21:30:54.709888 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:30:54.717278 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:30:54.718783 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:30:54.719116 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:30:54.721876 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 20 21:30:54.824955 containerd[1474]: time="2025-03-20T21:30:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:30:54.825959 containerd[1474]: time="2025-03-20T21:30:54.825904858Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:30:54.835642 containerd[1474]: time="2025-03-20T21:30:54.835554591Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.464µs" Mar 20 21:30:54.835642 containerd[1474]: time="2025-03-20T21:30:54.835627017Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:30:54.835736 containerd[1474]: time="2025-03-20T21:30:54.835653958Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:30:54.835892 containerd[1474]: time="2025-03-20T21:30:54.835862188Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:30:54.835892 containerd[1474]: time="2025-03-20T21:30:54.835884109Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:30:54.835957 containerd[1474]: time="2025-03-20T21:30:54.835913494Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836018 containerd[1474]: time="2025-03-20T21:30:54.835986661Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836018 containerd[1474]: time="2025-03-20T21:30:54.836008082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836336 containerd[1474]: time="2025-03-20T21:30:54.836304978Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836336 containerd[1474]: time="2025-03-20T21:30:54.836324845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836382 containerd[1474]: time="2025-03-20T21:30:54.836339212Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836382 containerd[1474]: time="2025-03-20T21:30:54.836351295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836469 containerd[1474]: time="2025-03-20T21:30:54.836441825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836740 containerd[1474]: time="2025-03-20T21:30:54.836708876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836778 containerd[1474]: time="2025-03-20T21:30:54.836748800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Mar 20 21:30:54.836778 containerd[1474]: time="2025-03-20T21:30:54.836759400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:30:54.836818 containerd[1474]: time="2025-03-20T21:30:54.836789276Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:30:54.837082 containerd[1474]: time="2025-03-20T21:30:54.837049354Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:30:54.837154 containerd[1474]: time="2025-03-20T21:30:54.837123333Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:30:54.843616 containerd[1474]: time="2025-03-20T21:30:54.843556966Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:30:54.843658 containerd[1474]: time="2025-03-20T21:30:54.843621567Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:30:54.843658 containerd[1474]: time="2025-03-20T21:30:54.843637547Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:30:54.843658 containerd[1474]: time="2025-03-20T21:30:54.843655300Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843667884Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843679285Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843691458Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843704753Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843715934Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843726554Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:30:54.843738 containerd[1474]: time="2025-03-20T21:30:54.843736092Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:30:54.843866 containerd[1474]: time="2025-03-20T21:30:54.843747814Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:30:54.843866 containerd[1474]: time="2025-03-20T21:30:54.843859413Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:30:54.843910 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843877907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843890912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: 
time="2025-03-20T21:30:54.843901782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843912372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843938882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843950784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843960943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843971693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843982614Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.843993344Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.844049830Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.844061351Z" level=info msg="Start snapshots syncer" Mar 20 21:30:54.844183 containerd[1474]: time="2025-03-20T21:30:54.844084174Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:30:54.844432 containerd[1474]: time="2025-03-20T21:30:54.844327851Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:30:54.844432 containerd[1474]: time="2025-03-20T21:30:54.844397612Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:30:54.844554 containerd[1474]: time="2025-03-20T21:30:54.844458476Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:30:54.844576 containerd[1474]: time="2025-03-20T21:30:54.844554546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:30:54.844618 containerd[1474]: time="2025-03-20T21:30:54.844573522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:30:54.844618 containerd[1474]: time="2025-03-20T21:30:54.844610491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:30:54.844663 containerd[1474]: time="2025-03-20T21:30:54.844621542Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:30:54.844663 containerd[1474]: time="2025-03-20T21:30:54.844651899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:30:54.844663 containerd[1474]: time="2025-03-20T21:30:54.844662048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:30:54.844718 containerd[1474]: time="2025-03-20T21:30:54.844673018Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:30:54.844718 containerd[1474]: time="2025-03-20T21:30:54.844694108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:30:54.844718 containerd[1474]: 
time="2025-03-20T21:30:54.844706010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:30:54.844718 containerd[1474]: time="2025-03-20T21:30:54.844715127Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:30:54.844793 containerd[1474]: time="2025-03-20T21:30:54.844747728Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:30:54.844793 containerd[1474]: time="2025-03-20T21:30:54.844760502Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:30:54.844793 containerd[1474]: time="2025-03-20T21:30:54.844769569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:30:54.844793 containerd[1474]: time="2025-03-20T21:30:54.844778847Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:30:54.844793 containerd[1474]: time="2025-03-20T21:30:54.844787433Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844797371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844808863Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844833068Z" level=info msg="runtime interface created" Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844838939Z" level=info msg="created NRI interface" Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844855781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:30:54.844883 containerd[1474]: time="2025-03-20T21:30:54.844866100Z" level=info msg="Connect containerd service" Mar 20 21:30:54.844989 containerd[1474]: time="2025-03-20T21:30:54.844888082Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:30:54.846192 containerd[1474]: time="2025-03-20T21:30:54.846147844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:30:54.866499 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:30:54.871285 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:30:54.890362 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:30:54.890663 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:30:54.893509 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:30:54.916983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:30:54.920079 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:30:54.922501 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 20 21:30:54.923754 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 20 21:30:54.934853 containerd[1474]: time="2025-03-20T21:30:54.934816397Z" level=info msg="Start subscribing containerd event" Mar 20 21:30:54.934904 containerd[1474]: time="2025-03-20T21:30:54.934880137Z" level=info msg="Start recovering state" Mar 20 21:30:54.935029 containerd[1474]: time="2025-03-20T21:30:54.934980665Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:30:54.935186 containerd[1474]: time="2025-03-20T21:30:54.935007866Z" level=info msg="Start event monitor" Mar 20 21:30:54.935186 containerd[1474]: time="2025-03-20T21:30:54.935186922Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935197381Z" level=info msg="Start streaming server" Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935197482Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935213522Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935221627Z" level=info msg="runtime interface starting up..." Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935227558Z" level=info msg="starting plugins..." Mar 20 21:30:54.935272 containerd[1474]: time="2025-03-20T21:30:54.935243808Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:30:54.935421 containerd[1474]: time="2025-03-20T21:30:54.935394090Z" level=info msg="containerd successfully booted in 0.111226s" Mar 20 21:30:54.935455 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 21:30:56.099844 systemd-networkd[1415]: eth0: Gained IPv6LL Mar 20 21:30:56.103503 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:30:56.105325 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:30:56.107970 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:30:56.110467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:30:56.112745 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:30:56.139652 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:30:56.140036 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:30:56.141918 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:30:56.144210 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:30:56.761054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:30:56.762974 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:30:56.766665 systemd[1]: Startup finished in 671ms (kernel) + 5.280s (initrd) + 4.499s (userspace) = 10.451s. 
Mar 20 21:30:56.802980 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:30:57.197538 kubelet[1570]: E0320 21:30:57.197351 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:30:57.201214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:30:57.201416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:30:57.201795 systemd[1]: kubelet.service: Consumed 936ms CPU time, 237.9M memory peak. Mar 20 21:30:59.971930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:30:59.973214 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:34928.service - OpenSSH per-connection server daemon (10.0.0.1:34928). Mar 20 21:31:00.097480 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 34928 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.099365 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:00.110096 systemd-logind[1459]: New session 1 of user core. Mar 20 21:31:00.111611 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:31:00.112874 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:31:00.136417 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:31:00.138992 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:31:00.152958 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:31:00.155176 systemd-logind[1459]: New session c1 of user core. Mar 20 21:31:00.294986 systemd[1587]: Queued start job for default target default.target. Mar 20 21:31:00.308994 systemd[1587]: Created slice app.slice - User Application Slice. Mar 20 21:31:00.309028 systemd[1587]: Reached target paths.target - Paths. Mar 20 21:31:00.309069 systemd[1587]: Reached target timers.target - Timers. Mar 20 21:31:00.310714 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:31:00.321224 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:31:00.321348 systemd[1587]: Reached target sockets.target - Sockets. Mar 20 21:31:00.321389 systemd[1587]: Reached target basic.target - Basic System. Mar 20 21:31:00.321431 systemd[1587]: Reached target default.target - Main User Target. Mar 20 21:31:00.321472 systemd[1587]: Startup finished in 160ms. Mar 20 21:31:00.321800 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:31:00.323426 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:31:00.386956 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:34936.service - OpenSSH per-connection server daemon (10.0.0.1:34936). Mar 20 21:31:00.435797 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 34936 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.437192 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:00.441149 systemd-logind[1459]: New session 2 of user core. 
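
The kubelet crash above is the usual "node not yet joined" state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join (the unit's KUBELET_KUBEADM_ARGS variable suggests that setup here), and until it exists the kubelet simply exits. A rough sketch of what such a KubeletConfiguration file can look like is below; the field values are assumptions for illustration, and only the file path and the containerd socket come from the log.

    from pathlib import Path
    from textwrap import dedent

    # Illustrative KubeletConfiguration (normally generated by kubeadm).
    # Field values are assumptions; the path is the one the kubelet complained about.
    KUBELET_CONFIG = dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd          # matches the systemd cgroup driver reported by the CRI runtime
        staticPodPath: /etc/kubernetes/manifests
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        evictionHard:
          memory.available: "100Mi"
          nodefs.available: "10%"
    """)

    path = Path("/var/lib/kubelet/config.yaml")
    print(KUBELET_CONFIG)
    # Writing it requires root and is normally kubeadm's job:
    # path.parent.mkdir(parents=True, exist_ok=True); path.write_text(KUBELET_CONFIG)
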
Mar 20 21:31:00.450726 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 21:31:00.503397 sshd[1600]: Connection closed by 10.0.0.1 port 34936 Mar 20 21:31:00.503707 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:00.514230 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:34936.service: Deactivated successfully. Mar 20 21:31:00.516176 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:31:00.517433 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:31:00.518649 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:34942.service - OpenSSH per-connection server daemon (10.0.0.1:34942). Mar 20 21:31:00.519291 systemd-logind[1459]: Removed session 2. Mar 20 21:31:00.572119 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 34942 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.573483 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:00.577397 systemd-logind[1459]: New session 3 of user core. Mar 20 21:31:00.586696 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:31:00.634825 sshd[1608]: Connection closed by 10.0.0.1 port 34942 Mar 20 21:31:00.635176 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:00.649894 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:34942.service: Deactivated successfully. Mar 20 21:31:00.651421 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:31:00.652984 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:31:00.654159 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:34948.service - OpenSSH per-connection server daemon (10.0.0.1:34948). Mar 20 21:31:00.654881 systemd-logind[1459]: Removed session 3. Mar 20 21:31:00.698862 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.700235 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:00.704360 systemd-logind[1459]: New session 4 of user core. Mar 20 21:31:00.718711 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:31:00.772992 sshd[1616]: Connection closed by 10.0.0.1 port 34948 Mar 20 21:31:00.773262 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:00.790253 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:34948.service: Deactivated successfully. Mar 20 21:31:00.792099 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:31:00.793606 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:31:00.794890 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:34956.service - OpenSSH per-connection server daemon (10.0.0.1:34956). Mar 20 21:31:00.795618 systemd-logind[1459]: Removed session 4. Mar 20 21:31:00.840064 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 34956 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.841319 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:00.845410 systemd-logind[1459]: New session 5 of user core. Mar 20 21:31:00.857706 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 21:31:00.915468 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:31:00.915826 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:31:00.932046 sudo[1625]: pam_unix(sudo:session): session closed for user root Mar 20 21:31:00.933490 sshd[1624]: Connection closed by 10.0.0.1 port 34956 Mar 20 21:31:00.933912 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:00.949295 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:34956.service: Deactivated successfully. Mar 20 21:31:00.951090 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:31:00.952453 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:31:00.953739 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Mar 20 21:31:00.954466 systemd-logind[1459]: Removed session 5. Mar 20 21:31:00.998411 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:00.999982 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:01.004278 systemd-logind[1459]: New session 6 of user core. Mar 20 21:31:01.013714 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 21:31:01.068215 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:31:01.068564 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:31:01.072700 sudo[1635]: pam_unix(sudo:session): session closed for user root Mar 20 21:31:01.078962 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:31:01.079266 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:31:01.089141 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:31:01.132204 augenrules[1657]: No rules Mar 20 21:31:01.134088 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:31:01.134362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:31:01.135751 sudo[1634]: pam_unix(sudo:session): session closed for user root Mar 20 21:31:01.137181 sshd[1633]: Connection closed by 10.0.0.1 port 34970 Mar 20 21:31:01.137538 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:01.149299 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:34970.service: Deactivated successfully. Mar 20 21:31:01.151141 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:31:01.151977 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:31:01.154230 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:34974.service - OpenSSH per-connection server daemon (10.0.0.1:34974). Mar 20 21:31:01.154958 systemd-logind[1459]: Removed session 6. Mar 20 21:31:01.210915 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 34974 ssh2: RSA SHA256:KJ7ck8imsv1/sWVS7eR1M7V7NSskkAYjKibngyOtAC0 Mar 20 21:31:01.212524 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:31:01.217466 systemd-logind[1459]: New session 7 of user core. Mar 20 21:31:01.228698 systemd[1]: Started session-7.scope - Session 7 of User core. 
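
The "augenrules[1657]: No rules" message above follows directly from the two sudo commands before it: augenrules assembles the audit ruleset by concatenating the *.rules fragments under /etc/audit/rules.d/, and both fragments on this host had just been removed. The sketch below mimics only that assembly step, under the simplifying assumption that concatenating the fragments is all that matters here; it is not the real augenrules implementation.

    from pathlib import Path

    RULES_DIR = Path("/etc/audit/rules.d")

    def assemble_rules(rules_dir: Path = RULES_DIR) -> str:
        """Concatenate the *.rules fragments in the order augenrules reads them."""
        fragments = sorted(rules_dir.glob("*.rules"))
        if not fragments:
            return ""  # corresponds to the "No rules" message in the log
        return "\n".join(f.read_text().rstrip() for f in fragments) + "\n"

    if __name__ == "__main__":
        merged = assemble_rules()
        print(merged if merged else "No rules")
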
Mar 20 21:31:01.281967 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:31:01.282293 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:31:01.295076 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:31:01.326336 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:31:01.326629 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:31:02.173326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:31:02.173502 systemd[1]: kubelet.service: Consumed 936ms CPU time, 237.9M memory peak. Mar 20 21:31:02.175744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:31:02.204404 systemd[1]: Reload requested from client PID 1710 ('systemctl') (unit session-7.scope)... Mar 20 21:31:02.204419 systemd[1]: Reloading... Mar 20 21:31:02.280933 zram_generator::config[1751]: No configuration found. Mar 20 21:31:02.761087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:31:02.863542 systemd[1]: Reloading finished in 658 ms. Mar 20 21:31:02.926703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:31:02.929236 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:31:02.929496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:31:02.929536 systemd[1]: kubelet.service: Consumed 162ms CPU time, 83.6M memory peak. Mar 20 21:31:02.931042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:31:03.102677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:31:03.113906 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:31:03.170039 kubelet[1803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:31:03.170039 kubelet[1803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:31:03.170039 kubelet[1803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 21:31:03.171130 kubelet[1803]: I0320 21:31:03.171086 1803 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:31:03.448223 kubelet[1803]: I0320 21:31:03.448126 1803 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 21:31:03.448223 kubelet[1803]: I0320 21:31:03.448153 1803 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:31:03.448410 kubelet[1803]: I0320 21:31:03.448381 1803 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 21:31:03.467761 kubelet[1803]: I0320 21:31:03.467719 1803 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:31:03.482109 kubelet[1803]: I0320 21:31:03.482073 1803 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:31:03.489029 kubelet[1803]: I0320 21:31:03.488994 1803 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 21:31:03.489913 kubelet[1803]: I0320 21:31:03.489888 1803 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 21:31:03.490069 kubelet[1803]: I0320 21:31:03.490030 1803 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:31:03.490235 kubelet[1803]: I0320 21:31:03.490059 1803 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:31:03.490235 kubelet[1803]: I0320 21:31:03.490232 1803 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:31:03.490374 kubelet[1803]: I0320 21:31:03.490239 1803 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 21:31:03.490374 kubelet[1803]: I0320 21:31:03.490350 1803 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:31:03.491282 kubelet[1803]: I0320 
21:31:03.491262 1803 kubelet.go:408] "Attempting to sync node with API server" Mar 20 21:31:03.491282 kubelet[1803]: I0320 21:31:03.491279 1803 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:31:03.491333 kubelet[1803]: I0320 21:31:03.491317 1803 kubelet.go:314] "Adding apiserver pod source" Mar 20 21:31:03.491333 kubelet[1803]: I0320 21:31:03.491332 1803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:31:03.491760 kubelet[1803]: E0320 21:31:03.491677 1803 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:03.491760 kubelet[1803]: E0320 21:31:03.491718 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:03.494666 kubelet[1803]: I0320 21:31:03.494643 1803 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:31:03.506093 kubelet[1803]: I0320 21:31:03.505504 1803 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:31:03.507454 kubelet[1803]: W0320 21:31:03.507426 1803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 21:31:03.508200 kubelet[1803]: I0320 21:31:03.508178 1803 server.go:1269] "Started kubelet" Mar 20 21:31:03.509406 kubelet[1803]: I0320 21:31:03.508326 1803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:31:03.509406 kubelet[1803]: I0320 21:31:03.508847 1803 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:31:03.509406 kubelet[1803]: I0320 21:31:03.508915 1803 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:31:03.510259 kubelet[1803]: I0320 21:31:03.509657 1803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:31:03.510259 kubelet[1803]: I0320 21:31:03.510004 1803 server.go:460] "Adding debug handlers to kubelet server" Mar 20 21:31:03.511005 kubelet[1803]: I0320 21:31:03.510690 1803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:31:03.511109 kubelet[1803]: I0320 21:31:03.511089 1803 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 21:31:03.511757 kubelet[1803]: I0320 21:31:03.511237 1803 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 21:31:03.511757 kubelet[1803]: I0320 21:31:03.511280 1803 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:31:03.511757 kubelet[1803]: E0320 21:31:03.511482 1803 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:31:03.511858 kubelet[1803]: I0320 21:31:03.511834 1803 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:31:03.511982 kubelet[1803]: I0320 21:31:03.511909 1803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:31:03.512684 kubelet[1803]: E0320 21:31:03.512415 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:03.513081 kubelet[1803]: I0320 21:31:03.513055 1803 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:31:03.520249 kubelet[1803]: E0320 21:31:03.520203 1803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.139\" not found" node="10.0.0.139" Mar 20 21:31:03.526320 kubelet[1803]: I0320 21:31:03.526294 1803 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:31:03.526320 kubelet[1803]: I0320 21:31:03.526311 1803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:31:03.526320 kubelet[1803]: I0320 21:31:03.526328 1803 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:31:03.612543 kubelet[1803]: E0320 21:31:03.612499 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:03.714992 kubelet[1803]: E0320 21:31:03.714847 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:03.815772 kubelet[1803]: E0320 21:31:03.815700 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:03.914725 kubelet[1803]: E0320 21:31:03.914673 1803 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.139" not found Mar 20 21:31:03.916862 kubelet[1803]: E0320 21:31:03.916813 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.017353 kubelet[1803]: E0320 21:31:04.017187 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.117938 kubelet[1803]: E0320 21:31:04.117892 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.218552 kubelet[1803]: E0320 21:31:04.218497 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.275236 kubelet[1803]: E0320 21:31:04.275152 1803 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.139" not found Mar 20 21:31:04.319596 kubelet[1803]: E0320 21:31:04.319553 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.364314 kubelet[1803]: I0320 21:31:04.364259 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:31:04.365508 kubelet[1803]: I0320 21:31:04.365483 1803 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:31:04.365571 kubelet[1803]: I0320 21:31:04.365524 1803 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:31:04.365571 kubelet[1803]: I0320 21:31:04.365543 1803 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:31:04.365633 kubelet[1803]: E0320 21:31:04.365591 1803 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:31:04.380916 kubelet[1803]: I0320 21:31:04.380880 1803 policy_none.go:49] "None policy: Start" Mar 20 21:31:04.382137 kubelet[1803]: I0320 21:31:04.382109 1803 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:31:04.382137 kubelet[1803]: I0320 21:31:04.382135 1803 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:31:04.420382 kubelet[1803]: E0320 21:31:04.420306 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.424720 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:31:04.440086 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:31:04.443482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:31:04.450369 kubelet[1803]: I0320 21:31:04.450345 1803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 20 21:31:04.450494 kubelet[1803]: W0320 21:31:04.450475 1803 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 20 21:31:04.450494 kubelet[1803]: W0320 21:31:04.450479 1803 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 20 21:31:04.450543 kubelet[1803]: W0320 21:31:04.450511 1803 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 20 21:31:04.450617 kubelet[1803]: W0320 21:31:04.450564 1803 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 20 21:31:04.456562 kubelet[1803]: I0320 21:31:04.456530 1803 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:31:04.456802 kubelet[1803]: I0320 21:31:04.456777 1803 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:31:04.456859 kubelet[1803]: I0320 21:31:04.456790 1803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:31:04.457597 kubelet[1803]: I0320 21:31:04.457085 1803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:31:04.458110 kubelet[1803]: E0320 21:31:04.457920 1803 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"10.0.0.139\" not found" Mar 20 21:31:04.492600 kubelet[1803]: E0320 21:31:04.492543 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:04.492600 kubelet[1803]: I0320 21:31:04.492568 1803 apiserver.go:52] "Watching apiserver" Mar 20 21:31:04.512034 kubelet[1803]: I0320 21:31:04.512002 1803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 21:31:04.558456 kubelet[1803]: I0320 21:31:04.558352 1803 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.139" Mar 20 21:31:04.563636 kubelet[1803]: I0320 21:31:04.563575 1803 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.139" Mar 20 21:31:04.563636 kubelet[1803]: E0320 21:31:04.563626 1803 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.139\": node \"10.0.0.139\" not found" Mar 20 21:31:04.573199 kubelet[1803]: E0320 21:31:04.573156 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.584503 systemd[1]: Created slice kubepods-besteffort-pod23077ba2_d5b4_4649_8a80_7f59f2f48946.slice - libcontainer container kubepods-besteffort-pod23077ba2_d5b4_4649_8a80_7f59f2f48946.slice. Mar 20 21:31:04.612718 systemd[1]: Created slice kubepods-burstable-pod0cd73910_5b7f_4ea9_871e_85e6f55f7cb3.slice - libcontainer container kubepods-burstable-pod0cd73910_5b7f_4ea9_871e_85e6f55f7cb3.slice. Mar 20 21:31:04.618186 kubelet[1803]: I0320 21:31:04.618145 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-lib-modules\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618186 kubelet[1803]: I0320 21:31:04.618181 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-xtables-lock\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618203 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cni-path\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618226 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-etc-cni-netd\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618248 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-config-path\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618268 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-net\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618287 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23077ba2-d5b4-4649-8a80-7f59f2f48946-kube-proxy\") pod \"kube-proxy-vrpbq\" (UID: \"23077ba2-d5b4-4649-8a80-7f59f2f48946\") " pod="kube-system/kube-proxy-vrpbq" Mar 20 21:31:04.618360 kubelet[1803]: I0320 21:31:04.618307 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23077ba2-d5b4-4649-8a80-7f59f2f48946-xtables-lock\") pod \"kube-proxy-vrpbq\" (UID: \"23077ba2-d5b4-4649-8a80-7f59f2f48946\") " pod="kube-system/kube-proxy-vrpbq" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618342 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hostproc\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618364 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-cgroup\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618387 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-kernel\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618407 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hubble-tls\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618427 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/23077ba2-d5b4-4649-8a80-7f59f2f48946-kube-api-access-m74pp\") pod \"kube-proxy-vrpbq\" (UID: \"23077ba2-d5b4-4649-8a80-7f59f2f48946\") " pod="kube-system/kube-proxy-vrpbq" Mar 20 21:31:04.618548 kubelet[1803]: I0320 21:31:04.618452 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-bpf-maps\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618744 kubelet[1803]: I0320 21:31:04.618475 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-clustermesh-secrets\") pod \"cilium-pxv2x\" (UID: 
\"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618744 kubelet[1803]: I0320 21:31:04.618496 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h775\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-kube-api-access-8h775\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.618744 kubelet[1803]: I0320 21:31:04.618515 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23077ba2-d5b4-4649-8a80-7f59f2f48946-lib-modules\") pod \"kube-proxy-vrpbq\" (UID: \"23077ba2-d5b4-4649-8a80-7f59f2f48946\") " pod="kube-system/kube-proxy-vrpbq" Mar 20 21:31:04.618744 kubelet[1803]: I0320 21:31:04.618602 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-run\") pod \"cilium-pxv2x\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " pod="kube-system/cilium-pxv2x" Mar 20 21:31:04.624911 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 20 21:31:04.626448 sshd[1668]: Connection closed by 10.0.0.1 port 34974 Mar 20 21:31:04.626731 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Mar 20 21:31:04.630146 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:34974.service: Deactivated successfully. Mar 20 21:31:04.632575 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:31:04.632794 systemd[1]: session-7.scope: Consumed 902ms CPU time, 73.9M memory peak. Mar 20 21:31:04.634159 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:31:04.635072 systemd-logind[1459]: Removed session 7. 
Mar 20 21:31:04.673674 kubelet[1803]: E0320 21:31:04.673646 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.774455 kubelet[1803]: E0320 21:31:04.774388 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.875066 kubelet[1803]: E0320 21:31:04.874893 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:04.908927 containerd[1474]: time="2025-03-20T21:31:04.908874624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrpbq,Uid:23077ba2-d5b4-4649-8a80-7f59f2f48946,Namespace:kube-system,Attempt:0,}" Mar 20 21:31:04.921730 containerd[1474]: time="2025-03-20T21:31:04.921688319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxv2x,Uid:0cd73910-5b7f-4ea9-871e-85e6f55f7cb3,Namespace:kube-system,Attempt:0,}" Mar 20 21:31:04.976082 kubelet[1803]: E0320 21:31:04.976033 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.076900 kubelet[1803]: E0320 21:31:05.076824 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.177474 kubelet[1803]: E0320 21:31:05.177371 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.278053 kubelet[1803]: E0320 21:31:05.277980 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.378796 kubelet[1803]: E0320 21:31:05.378748 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.479865 kubelet[1803]: E0320 21:31:05.479750 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.493024 kubelet[1803]: E0320 21:31:05.492960 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:05.552992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890303153.mount: Deactivated successfully. 
Mar 20 21:31:05.561889 containerd[1474]: time="2025-03-20T21:31:05.561839012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:31:05.563531 containerd[1474]: time="2025-03-20T21:31:05.563487783Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:31:05.564444 containerd[1474]: time="2025-03-20T21:31:05.564384595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 20 21:31:05.565236 containerd[1474]: time="2025-03-20T21:31:05.565174716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 21:31:05.566152 containerd[1474]: time="2025-03-20T21:31:05.566107235Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:31:05.568196 containerd[1474]: time="2025-03-20T21:31:05.568161677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:31:05.568982 containerd[1474]: time="2025-03-20T21:31:05.568950185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 642.327194ms" Mar 20 21:31:05.570126 containerd[1474]: time="2025-03-20T21:31:05.570094020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 653.830673ms" Mar 20 21:31:05.580901 kubelet[1803]: E0320 21:31:05.580856 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.592146 containerd[1474]: time="2025-03-20T21:31:05.592094952Z" level=info msg="connecting to shim bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f" address="unix:///run/containerd/s/b336944c7be9a826b6baa74cf5352d4f52965676f35f0e7f1dd9f58c444b756c" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:31:05.594920 containerd[1474]: time="2025-03-20T21:31:05.594850669Z" level=info msg="connecting to shim f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:31:05.623227 systemd[1]: Started cri-containerd-f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08.scope - libcontainer container f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08. 
Mar 20 21:31:05.627002 systemd[1]: Started cri-containerd-bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f.scope - libcontainer container bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f. Mar 20 21:31:05.653753 containerd[1474]: time="2025-03-20T21:31:05.653697166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxv2x,Uid:0cd73910-5b7f-4ea9-871e-85e6f55f7cb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\"" Mar 20 21:31:05.655865 containerd[1474]: time="2025-03-20T21:31:05.655841717Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 21:31:05.660362 containerd[1474]: time="2025-03-20T21:31:05.660339280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrpbq,Uid:23077ba2-d5b4-4649-8a80-7f59f2f48946,Namespace:kube-system,Attempt:0,} returns sandbox id \"bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f\"" Mar 20 21:31:05.681595 kubelet[1803]: E0320 21:31:05.681537 1803 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Mar 20 21:31:05.782519 kubelet[1803]: I0320 21:31:05.782428 1803 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 20 21:31:05.782985 containerd[1474]: time="2025-03-20T21:31:05.782842102Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 21:31:05.783388 kubelet[1803]: I0320 21:31:05.783202 1803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 20 21:31:06.493839 kubelet[1803]: E0320 21:31:06.493791 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:07.493964 kubelet[1803]: E0320 21:31:07.493894 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:08.495057 kubelet[1803]: E0320 21:31:08.495003 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:09.495355 kubelet[1803]: E0320 21:31:09.495288 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:10.496129 kubelet[1803]: E0320 21:31:10.496086 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:11.496270 kubelet[1803]: E0320 21:31:11.496217 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:11.525017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100728505.mount: Deactivated successfully. 
Mar 20 21:31:12.513870 kubelet[1803]: E0320 21:31:12.513775 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:13.514281 kubelet[1803]: E0320 21:31:13.514239 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:14.126835 containerd[1474]: time="2025-03-20T21:31:14.126779322Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:14.127645 containerd[1474]: time="2025-03-20T21:31:14.127565456Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 20 21:31:14.128605 containerd[1474]: time="2025-03-20T21:31:14.128561224Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:14.129911 containerd[1474]: time="2025-03-20T21:31:14.129887350Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.474014034s" Mar 20 21:31:14.129945 containerd[1474]: time="2025-03-20T21:31:14.129913118Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 20 21:31:14.130789 containerd[1474]: time="2025-03-20T21:31:14.130771698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 20 21:31:14.131871 containerd[1474]: time="2025-03-20T21:31:14.131839420Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:31:14.140908 containerd[1474]: time="2025-03-20T21:31:14.140866165Z" level=info msg="Container 7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:14.150130 containerd[1474]: time="2025-03-20T21:31:14.150083748Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\"" Mar 20 21:31:14.150666 containerd[1474]: time="2025-03-20T21:31:14.150631095Z" level=info msg="StartContainer for \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\"" Mar 20 21:31:14.151605 containerd[1474]: time="2025-03-20T21:31:14.151556710Z" level=info msg="connecting to shim 7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" protocol=ttrpc version=3 Mar 20 21:31:14.268765 systemd[1]: Started cri-containerd-7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d.scope - libcontainer container 
7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d. Mar 20 21:31:14.305478 containerd[1474]: time="2025-03-20T21:31:14.305431856Z" level=info msg="StartContainer for \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" returns successfully" Mar 20 21:31:14.315948 systemd[1]: cri-containerd-7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d.scope: Deactivated successfully. Mar 20 21:31:14.317736 containerd[1474]: time="2025-03-20T21:31:14.317695280Z" level=info msg="received exit event container_id:\"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" id:\"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" pid:1985 exited_at:{seconds:1742506274 nanos:317302594}" Mar 20 21:31:14.317911 containerd[1474]: time="2025-03-20T21:31:14.317791570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" id:\"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" pid:1985 exited_at:{seconds:1742506274 nanos:317302594}" Mar 20 21:31:14.338526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d-rootfs.mount: Deactivated successfully. Mar 20 21:31:14.514499 kubelet[1803]: E0320 21:31:14.514341 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:15.514909 kubelet[1803]: E0320 21:31:15.514860 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:16.392613 containerd[1474]: time="2025-03-20T21:31:16.391370390Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:31:16.454150 containerd[1474]: time="2025-03-20T21:31:16.454099898Z" level=info msg="Container 0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:16.466147 containerd[1474]: time="2025-03-20T21:31:16.466114315Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\"" Mar 20 21:31:16.466594 containerd[1474]: time="2025-03-20T21:31:16.466536206Z" level=info msg="StartContainer for \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\"" Mar 20 21:31:16.469123 containerd[1474]: time="2025-03-20T21:31:16.468990057Z" level=info msg="connecting to shim 0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" protocol=ttrpc version=3 Mar 20 21:31:16.506839 systemd[1]: Started cri-containerd-0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc.scope - libcontainer container 0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc. 
Mar 20 21:31:16.515616 kubelet[1803]: E0320 21:31:16.515528 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:16.543724 containerd[1474]: time="2025-03-20T21:31:16.543665163Z" level=info msg="StartContainer for \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" returns successfully" Mar 20 21:31:16.563471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:31:16.563810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:31:16.565198 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:31:16.567077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:31:16.569048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:31:16.569695 systemd[1]: cri-containerd-0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc.scope: Deactivated successfully. Mar 20 21:31:16.570033 containerd[1474]: time="2025-03-20T21:31:16.569792712Z" level=info msg="received exit event container_id:\"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" id:\"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" pid:2036 exited_at:{seconds:1742506276 nanos:569595202}" Mar 20 21:31:16.570076 containerd[1474]: time="2025-03-20T21:31:16.570025589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" id:\"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" pid:2036 exited_at:{seconds:1742506276 nanos:569595202}" Mar 20 21:31:16.630087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:31:17.251002 containerd[1474]: time="2025-03-20T21:31:17.250918352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:17.251766 containerd[1474]: time="2025-03-20T21:31:17.251696551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 20 21:31:17.253026 containerd[1474]: time="2025-03-20T21:31:17.252989325Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:17.255046 containerd[1474]: time="2025-03-20T21:31:17.255013551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:17.255569 containerd[1474]: time="2025-03-20T21:31:17.255526573Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 3.124660507s" Mar 20 21:31:17.255616 containerd[1474]: time="2025-03-20T21:31:17.255569994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 20 21:31:17.257336 containerd[1474]: time="2025-03-20T21:31:17.257302703Z" level=info msg="CreateContainer within sandbox 
\"bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:31:17.268114 containerd[1474]: time="2025-03-20T21:31:17.268073398Z" level=info msg="Container 2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:17.277424 containerd[1474]: time="2025-03-20T21:31:17.277359279Z" level=info msg="CreateContainer within sandbox \"bafbb4b33320d4a3426ef5c49572250fe71624b6689bd7f8126f310e8335070f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2\"" Mar 20 21:31:17.277940 containerd[1474]: time="2025-03-20T21:31:17.277903249Z" level=info msg="StartContainer for \"2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2\"" Mar 20 21:31:17.279272 containerd[1474]: time="2025-03-20T21:31:17.279242901Z" level=info msg="connecting to shim 2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2" address="unix:///run/containerd/s/b336944c7be9a826b6baa74cf5352d4f52965676f35f0e7f1dd9f58c444b756c" protocol=ttrpc version=3 Mar 20 21:31:17.382244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc-rootfs.mount: Deactivated successfully. Mar 20 21:31:17.382366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448767075.mount: Deactivated successfully. Mar 20 21:31:17.399248 systemd[1]: Started cri-containerd-2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2.scope - libcontainer container 2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2. Mar 20 21:31:17.401863 containerd[1474]: time="2025-03-20T21:31:17.401780208Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:31:17.515933 kubelet[1803]: E0320 21:31:17.515821 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:17.542785 containerd[1474]: time="2025-03-20T21:31:17.542729228Z" level=info msg="StartContainer for \"2d9e86630c130c1f4a3459a0cd5b5fb709d1d7c9bd8458913d32f3ee47c242c2\" returns successfully" Mar 20 21:31:17.574879 containerd[1474]: time="2025-03-20T21:31:17.574804509Z" level=info msg="Container ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:17.727107 containerd[1474]: time="2025-03-20T21:31:17.727068333Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\"" Mar 20 21:31:17.727799 containerd[1474]: time="2025-03-20T21:31:17.727775119Z" level=info msg="StartContainer for \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\"" Mar 20 21:31:17.729551 containerd[1474]: time="2025-03-20T21:31:17.729516975Z" level=info msg="connecting to shim ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" protocol=ttrpc version=3 Mar 20 21:31:17.756960 systemd[1]: Started cri-containerd-ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346.scope - libcontainer container 
ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346. Mar 20 21:31:17.892322 systemd[1]: cri-containerd-ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346.scope: Deactivated successfully. Mar 20 21:31:17.894424 containerd[1474]: time="2025-03-20T21:31:17.894391835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" id:\"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" pid:2176 exited_at:{seconds:1742506277 nanos:894008787}" Mar 20 21:31:17.914448 containerd[1474]: time="2025-03-20T21:31:17.914386164Z" level=info msg="received exit event container_id:\"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" id:\"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" pid:2176 exited_at:{seconds:1742506277 nanos:894008787}" Mar 20 21:31:17.916325 containerd[1474]: time="2025-03-20T21:31:17.916280857Z" level=info msg="StartContainer for \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" returns successfully" Mar 20 21:31:17.981901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346-rootfs.mount: Deactivated successfully. Mar 20 21:31:18.514424 kubelet[1803]: I0320 21:31:18.514350 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vrpbq" podStartSLOduration=2.9191104450000003 podStartE2EDuration="14.514333195s" podCreationTimestamp="2025-03-20 21:31:04 +0000 UTC" firstStartedPulling="2025-03-20 21:31:05.661019054 +0000 UTC m=+2.536239589" lastFinishedPulling="2025-03-20 21:31:17.256241804 +0000 UTC m=+14.131462339" observedRunningTime="2025-03-20 21:31:18.457995141 +0000 UTC m=+15.333215676" watchObservedRunningTime="2025-03-20 21:31:18.514333195 +0000 UTC m=+15.389553730" Mar 20 21:31:18.516252 kubelet[1803]: E0320 21:31:18.516220 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:19.412630 containerd[1474]: time="2025-03-20T21:31:19.412571081Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:31:19.517316 kubelet[1803]: E0320 21:31:19.517278 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:19.617051 containerd[1474]: time="2025-03-20T21:31:19.616992190Z" level=info msg="Container bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:19.620976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859333124.mount: Deactivated successfully. 
Mar 20 21:31:19.759781 containerd[1474]: time="2025-03-20T21:31:19.759629425Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\"" Mar 20 21:31:19.760398 containerd[1474]: time="2025-03-20T21:31:19.760326472Z" level=info msg="StartContainer for \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\"" Mar 20 21:31:19.761334 containerd[1474]: time="2025-03-20T21:31:19.761306349Z" level=info msg="connecting to shim bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" protocol=ttrpc version=3 Mar 20 21:31:19.806735 systemd[1]: Started cri-containerd-bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79.scope - libcontainer container bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79. Mar 20 21:31:19.835916 systemd[1]: cri-containerd-bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79.scope: Deactivated successfully. Mar 20 21:31:19.836400 containerd[1474]: time="2025-03-20T21:31:19.836366747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" id:\"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" pid:2291 exited_at:{seconds:1742506279 nanos:836133971}" Mar 20 21:31:19.871704 containerd[1474]: time="2025-03-20T21:31:19.871653450Z" level=info msg="received exit event container_id:\"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" id:\"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" pid:2291 exited_at:{seconds:1742506279 nanos:836133971}" Mar 20 21:31:19.884536 containerd[1474]: time="2025-03-20T21:31:19.884490529Z" level=info msg="StartContainer for \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" returns successfully" Mar 20 21:31:19.897175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79-rootfs.mount: Deactivated successfully. 
Mar 20 21:31:20.416515 containerd[1474]: time="2025-03-20T21:31:20.416461816Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 21:31:20.426899 containerd[1474]: time="2025-03-20T21:31:20.426849182Z" level=info msg="Container 213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:20.435689 containerd[1474]: time="2025-03-20T21:31:20.435657537Z" level=info msg="CreateContainer within sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\"" Mar 20 21:31:20.436096 containerd[1474]: time="2025-03-20T21:31:20.436062066Z" level=info msg="StartContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\"" Mar 20 21:31:20.437085 containerd[1474]: time="2025-03-20T21:31:20.437050189Z" level=info msg="connecting to shim 213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d" address="unix:///run/containerd/s/8f8e6ca8472ef2b1e8b46262b98e87e2b215a9f7578dca6eb0bc52a33a1c88d4" protocol=ttrpc version=3 Mar 20 21:31:20.454742 systemd[1]: Started cri-containerd-213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d.scope - libcontainer container 213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d. Mar 20 21:31:20.489394 containerd[1474]: time="2025-03-20T21:31:20.489354239Z" level=info msg="StartContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" returns successfully" Mar 20 21:31:20.518464 kubelet[1803]: E0320 21:31:20.518402 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:20.599248 containerd[1474]: time="2025-03-20T21:31:20.599204127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" id:\"a885bb6150ad70e54d097a096217ead43790b8f5eff1da6fb535ee6e62aa3945\" pid:2358 exited_at:{seconds:1742506280 nanos:598928390}" Mar 20 21:31:20.619349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601421213.mount: Deactivated successfully. 
Mar 20 21:31:20.627701 kubelet[1803]: I0320 21:31:20.627664 1803 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 20 21:31:20.977606 kernel: Initializing XFRM netlink socket Mar 20 21:31:21.434137 kubelet[1803]: I0320 21:31:21.433976 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pxv2x" podStartSLOduration=8.958751661 podStartE2EDuration="17.433956177s" podCreationTimestamp="2025-03-20 21:31:04 +0000 UTC" firstStartedPulling="2025-03-20 21:31:05.655467956 +0000 UTC m=+2.530688491" lastFinishedPulling="2025-03-20 21:31:14.130672472 +0000 UTC m=+11.005893007" observedRunningTime="2025-03-20 21:31:21.433642128 +0000 UTC m=+18.308862673" watchObservedRunningTime="2025-03-20 21:31:21.433956177 +0000 UTC m=+18.309176712" Mar 20 21:31:21.519058 kubelet[1803]: E0320 21:31:21.518983 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:22.519670 kubelet[1803]: E0320 21:31:22.519628 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:22.713603 systemd-networkd[1415]: cilium_host: Link UP Mar 20 21:31:22.713756 systemd-networkd[1415]: cilium_net: Link UP Mar 20 21:31:22.717763 systemd-networkd[1415]: cilium_net: Gained carrier Mar 20 21:31:22.718501 systemd-networkd[1415]: cilium_host: Gained carrier Mar 20 21:31:22.718672 systemd-networkd[1415]: cilium_net: Gained IPv6LL Mar 20 21:31:22.718835 systemd-networkd[1415]: cilium_host: Gained IPv6LL Mar 20 21:31:22.820470 systemd-networkd[1415]: cilium_vxlan: Link UP Mar 20 21:31:22.820483 systemd-networkd[1415]: cilium_vxlan: Gained carrier Mar 20 21:31:23.044615 kernel: NET: Registered PF_ALG protocol family Mar 20 21:31:23.492195 kubelet[1803]: E0320 21:31:23.492136 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:23.519946 kubelet[1803]: E0320 21:31:23.519883 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:23.692309 systemd-networkd[1415]: lxc_health: Link UP Mar 20 21:31:23.696777 systemd-networkd[1415]: lxc_health: Gained carrier Mar 20 21:31:24.323753 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL Mar 20 21:31:24.520429 kubelet[1803]: E0320 21:31:24.520387 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:25.219847 systemd-networkd[1415]: lxc_health: Gained IPv6LL Mar 20 21:31:25.521023 kubelet[1803]: E0320 21:31:25.520873 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:25.955644 systemd[1]: Created slice kubepods-besteffort-pod393352ae_82d8_4f4c_88a1_26f85054da83.slice - libcontainer container kubepods-besteffort-pod393352ae_82d8_4f4c_88a1_26f85054da83.slice. 
Mar 20 21:31:26.044119 kubelet[1803]: I0320 21:31:26.044062 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffrc\" (UniqueName: \"kubernetes.io/projected/393352ae-82d8-4f4c-88a1-26f85054da83-kube-api-access-bffrc\") pod \"nginx-deployment-8587fbcb89-mkcrf\" (UID: \"393352ae-82d8-4f4c-88a1-26f85054da83\") " pod="default/nginx-deployment-8587fbcb89-mkcrf" Mar 20 21:31:26.259474 containerd[1474]: time="2025-03-20T21:31:26.259132662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mkcrf,Uid:393352ae-82d8-4f4c-88a1-26f85054da83,Namespace:default,Attempt:0,}" Mar 20 21:31:26.360290 systemd-networkd[1415]: lxcf5882eda9736: Link UP Mar 20 21:31:26.361646 kernel: eth0: renamed from tmpbaae2 Mar 20 21:31:26.364210 systemd-networkd[1415]: lxcf5882eda9736: Gained carrier Mar 20 21:31:26.521621 kubelet[1803]: E0320 21:31:26.521438 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:27.522493 kubelet[1803]: E0320 21:31:27.522440 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:27.971828 systemd-networkd[1415]: lxcf5882eda9736: Gained IPv6LL Mar 20 21:31:28.418572 containerd[1474]: time="2025-03-20T21:31:28.418331866Z" level=info msg="connecting to shim baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923" address="unix:///run/containerd/s/e90e096ecdde89b07c0e8d369c81f272acda58350477e2986634e2ee574fa659" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:31:28.448714 systemd[1]: Started cri-containerd-baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923.scope - libcontainer container baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923. Mar 20 21:31:28.462597 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:31:28.499245 containerd[1474]: time="2025-03-20T21:31:28.499199539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mkcrf,Uid:393352ae-82d8-4f4c-88a1-26f85054da83,Namespace:default,Attempt:0,} returns sandbox id \"baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923\"" Mar 20 21:31:28.500573 containerd[1474]: time="2025-03-20T21:31:28.500523656Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 20 21:31:28.523160 kubelet[1803]: E0320 21:31:28.523098 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:29.524001 kubelet[1803]: E0320 21:31:29.523950 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:30.524639 kubelet[1803]: E0320 21:31:30.524561 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:31.527460 kubelet[1803]: E0320 21:31:31.527410 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:31.664259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112963399.mount: Deactivated successfully. 
Mar 20 21:31:32.527932 kubelet[1803]: E0320 21:31:32.527859 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:33.285839 containerd[1474]: time="2025-03-20T21:31:33.285780442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:33.286570 containerd[1474]: time="2025-03-20T21:31:33.286510767Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 20 21:31:33.287608 containerd[1474]: time="2025-03-20T21:31:33.287564300Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:33.290316 containerd[1474]: time="2025-03-20T21:31:33.290284436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:33.291267 containerd[1474]: time="2025-03-20T21:31:33.291223479Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 4.790662953s" Mar 20 21:31:33.291267 containerd[1474]: time="2025-03-20T21:31:33.291266682Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 20 21:31:33.293197 containerd[1474]: time="2025-03-20T21:31:33.293160599Z" level=info msg="CreateContainer within sandbox \"baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 20 21:31:33.301693 containerd[1474]: time="2025-03-20T21:31:33.301657060Z" level=info msg="Container 646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:33.308076 containerd[1474]: time="2025-03-20T21:31:33.308030573Z" level=info msg="CreateContainer within sandbox \"baae2bfa727e69da5b69523779c8419b8d8a7d16189df5b977945578d9b75923\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015\"" Mar 20 21:31:33.308446 containerd[1474]: time="2025-03-20T21:31:33.308414688Z" level=info msg="StartContainer for \"646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015\"" Mar 20 21:31:33.309588 containerd[1474]: time="2025-03-20T21:31:33.309558432Z" level=info msg="connecting to shim 646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015" address="unix:///run/containerd/s/e90e096ecdde89b07c0e8d369c81f272acda58350477e2986634e2ee574fa659" protocol=ttrpc version=3 Mar 20 21:31:33.398883 systemd[1]: Started cri-containerd-646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015.scope - libcontainer container 646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015. 
Mar 20 21:31:33.449667 containerd[1474]: time="2025-03-20T21:31:33.449626013Z" level=info msg="StartContainer for \"646bea1b57fdb41a0c1121b2448a53072d6ed234917628f493166f6d2d66d015\" returns successfully" Mar 20 21:31:33.528506 kubelet[1803]: E0320 21:31:33.528459 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:34.472292 kubelet[1803]: I0320 21:31:34.472220 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mkcrf" podStartSLOduration=4.680482137 podStartE2EDuration="9.472205259s" podCreationTimestamp="2025-03-20 21:31:25 +0000 UTC" firstStartedPulling="2025-03-20 21:31:28.500275098 +0000 UTC m=+25.375495633" lastFinishedPulling="2025-03-20 21:31:33.29199822 +0000 UTC m=+30.167218755" observedRunningTime="2025-03-20 21:31:34.471898784 +0000 UTC m=+31.347119339" watchObservedRunningTime="2025-03-20 21:31:34.472205259 +0000 UTC m=+31.347425794" Mar 20 21:31:34.528973 kubelet[1803]: E0320 21:31:34.528937 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:35.530077 kubelet[1803]: E0320 21:31:35.530015 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:36.530597 kubelet[1803]: E0320 21:31:36.530515 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:37.531426 kubelet[1803]: E0320 21:31:37.531356 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:38.169370 systemd[1]: Created slice kubepods-besteffort-pod343f9464_ac7d_4364_90e2_e3b3f8860eaa.slice - libcontainer container kubepods-besteffort-pod343f9464_ac7d_4364_90e2_e3b3f8860eaa.slice. 
Mar 20 21:31:38.315058 kubelet[1803]: I0320 21:31:38.315013 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/343f9464-ac7d-4364-90e2-e3b3f8860eaa-data\") pod \"nfs-server-provisioner-0\" (UID: \"343f9464-ac7d-4364-90e2-e3b3f8860eaa\") " pod="default/nfs-server-provisioner-0" Mar 20 21:31:38.315058 kubelet[1803]: I0320 21:31:38.315059 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rnpc\" (UniqueName: \"kubernetes.io/projected/343f9464-ac7d-4364-90e2-e3b3f8860eaa-kube-api-access-9rnpc\") pod \"nfs-server-provisioner-0\" (UID: \"343f9464-ac7d-4364-90e2-e3b3f8860eaa\") " pod="default/nfs-server-provisioner-0" Mar 20 21:31:38.473256 containerd[1474]: time="2025-03-20T21:31:38.472552187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:343f9464-ac7d-4364-90e2-e3b3f8860eaa,Namespace:default,Attempt:0,}" Mar 20 21:31:38.531738 kubelet[1803]: E0320 21:31:38.531689 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:38.840616 kernel: eth0: renamed from tmp3ca45 Mar 20 21:31:38.847467 systemd-networkd[1415]: lxc98aa64102432: Link UP Mar 20 21:31:38.848201 systemd-networkd[1415]: lxc98aa64102432: Gained carrier Mar 20 21:31:39.047434 containerd[1474]: time="2025-03-20T21:31:39.047371944Z" level=info msg="connecting to shim 3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a" address="unix:///run/containerd/s/86ca7d38e70737496003c43b10b17f7359ce940b630680c86b45f32144fff80c" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:31:39.077852 systemd[1]: Started cri-containerd-3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a.scope - libcontainer container 3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a. Mar 20 21:31:39.090903 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:31:39.120950 containerd[1474]: time="2025-03-20T21:31:39.120813119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:343f9464-ac7d-4364-90e2-e3b3f8860eaa,Namespace:default,Attempt:0,} returns sandbox id \"3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a\"" Mar 20 21:31:39.122324 containerd[1474]: time="2025-03-20T21:31:39.122292408Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 20 21:31:39.532734 kubelet[1803]: E0320 21:31:39.532644 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:40.003789 systemd-networkd[1415]: lxc98aa64102432: Gained IPv6LL Mar 20 21:31:40.316779 update_engine[1461]: I20250320 21:31:40.316624 1461 update_attempter.cc:509] Updating boot flags... 
Mar 20 21:31:40.342080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2963) Mar 20 21:31:40.385616 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2963) Mar 20 21:31:40.430614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2963) Mar 20 21:31:40.533324 kubelet[1803]: E0320 21:31:40.533272 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:41.083013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670191613.mount: Deactivated successfully. Mar 20 21:31:41.533910 kubelet[1803]: E0320 21:31:41.533784 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:42.534699 kubelet[1803]: E0320 21:31:42.534648 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:43.238401 containerd[1474]: time="2025-03-20T21:31:43.238340513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:43.239128 containerd[1474]: time="2025-03-20T21:31:43.239094010Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 20 21:31:43.240425 containerd[1474]: time="2025-03-20T21:31:43.240380907Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:43.243048 containerd[1474]: time="2025-03-20T21:31:43.243018270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:43.244133 containerd[1474]: time="2025-03-20T21:31:43.244078748Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.121748519s" Mar 20 21:31:43.244133 containerd[1474]: time="2025-03-20T21:31:43.244114426Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 20 21:31:43.245823 containerd[1474]: time="2025-03-20T21:31:43.245797874Z" level=info msg="CreateContainer within sandbox \"3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 20 21:31:43.252555 containerd[1474]: time="2025-03-20T21:31:43.252520623Z" level=info msg="Container 103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:43.261484 containerd[1474]: time="2025-03-20T21:31:43.261443347Z" level=info msg="CreateContainer within sandbox \"3ca45496260d431a7a0196532d2607e78e97cf157eb2d4001e80ebbe0a311d1a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id 
\"103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e\"" Mar 20 21:31:43.261962 containerd[1474]: time="2025-03-20T21:31:43.261923747Z" level=info msg="StartContainer for \"103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e\"" Mar 20 21:31:43.262774 containerd[1474]: time="2025-03-20T21:31:43.262742457Z" level=info msg="connecting to shim 103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e" address="unix:///run/containerd/s/86ca7d38e70737496003c43b10b17f7359ce940b630680c86b45f32144fff80c" protocol=ttrpc version=3 Mar 20 21:31:43.305705 systemd[1]: Started cri-containerd-103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e.scope - libcontainer container 103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e. Mar 20 21:31:43.337832 containerd[1474]: time="2025-03-20T21:31:43.337786007Z" level=info msg="StartContainer for \"103797c50920f0c7dbad7d06d019082bbeceda9a03ce09dd287049d06399338e\" returns successfully" Mar 20 21:31:43.471932 kubelet[1803]: I0320 21:31:43.471881 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.349215041 podStartE2EDuration="5.471865679s" podCreationTimestamp="2025-03-20 21:31:38 +0000 UTC" firstStartedPulling="2025-03-20 21:31:39.122001346 +0000 UTC m=+35.997221881" lastFinishedPulling="2025-03-20 21:31:43.244651984 +0000 UTC m=+40.119872519" observedRunningTime="2025-03-20 21:31:43.471576382 +0000 UTC m=+40.346796917" watchObservedRunningTime="2025-03-20 21:31:43.471865679 +0000 UTC m=+40.347086214" Mar 20 21:31:43.492066 kubelet[1803]: E0320 21:31:43.491974 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:43.535655 kubelet[1803]: E0320 21:31:43.535619 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:44.535953 kubelet[1803]: E0320 21:31:44.535916 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:45.536340 kubelet[1803]: E0320 21:31:45.536300 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:46.537182 kubelet[1803]: E0320 21:31:46.537120 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:47.537878 kubelet[1803]: E0320 21:31:47.537823 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:48.538052 kubelet[1803]: E0320 21:31:48.537979 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:49.538818 kubelet[1803]: E0320 21:31:49.538692 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:50.539718 kubelet[1803]: E0320 21:31:50.539664 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:51.540460 kubelet[1803]: E0320 21:31:51.540409 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:52.541473 kubelet[1803]: E0320 21:31:52.541418 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 20 21:31:52.616716 systemd[1]: Created slice kubepods-besteffort-pod3154e163_9603_4ab4_b32a_2c057f2eb93d.slice - libcontainer container kubepods-besteffort-pod3154e163_9603_4ab4_b32a_2c057f2eb93d.slice. Mar 20 21:31:52.785048 kubelet[1803]: I0320 21:31:52.785001 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0a3383ff-6ae3-4aee-b2d1-119488b39677\" (UniqueName: \"kubernetes.io/nfs/3154e163-9603-4ab4-b32a-2c057f2eb93d-pvc-0a3383ff-6ae3-4aee-b2d1-119488b39677\") pod \"test-pod-1\" (UID: \"3154e163-9603-4ab4-b32a-2c057f2eb93d\") " pod="default/test-pod-1" Mar 20 21:31:52.785048 kubelet[1803]: I0320 21:31:52.785040 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvl5\" (UniqueName: \"kubernetes.io/projected/3154e163-9603-4ab4-b32a-2c057f2eb93d-kube-api-access-tvvl5\") pod \"test-pod-1\" (UID: \"3154e163-9603-4ab4-b32a-2c057f2eb93d\") " pod="default/test-pod-1" Mar 20 21:31:52.914772 kernel: FS-Cache: Loaded Mar 20 21:31:52.981029 kernel: RPC: Registered named UNIX socket transport module. Mar 20 21:31:52.981134 kernel: RPC: Registered udp transport module. Mar 20 21:31:52.981162 kernel: RPC: Registered tcp transport module. Mar 20 21:31:52.981184 kernel: RPC: Registered tcp-with-tls transport module. Mar 20 21:31:52.981744 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 20 21:31:53.195736 kernel: NFS: Registering the id_resolver key type Mar 20 21:31:53.195878 kernel: Key type id_resolver registered Mar 20 21:31:53.195906 kernel: Key type id_legacy registered Mar 20 21:31:53.220057 nfsidmap[3138]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 20 21:31:53.222353 nfsidmap[3139]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 20 21:31:53.519481 containerd[1474]: time="2025-03-20T21:31:53.519364762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3154e163-9603-4ab4-b32a-2c057f2eb93d,Namespace:default,Attempt:0,}" Mar 20 21:31:53.541579 kubelet[1803]: E0320 21:31:53.541545 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:53.542455 systemd-networkd[1415]: lxc577346984c31: Link UP Mar 20 21:31:53.543623 kernel: eth0: renamed from tmp926de Mar 20 21:31:53.548166 systemd-networkd[1415]: lxc577346984c31: Gained carrier Mar 20 21:31:53.748883 containerd[1474]: time="2025-03-20T21:31:53.748839586Z" level=info msg="connecting to shim 926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0" address="unix:///run/containerd/s/309188a427cb4b264dc93b2a5be10ad8cc72a1cfb212487bcd0bc908ac4a465e" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:31:53.772710 systemd[1]: Started cri-containerd-926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0.scope - libcontainer container 926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0. 
Mar 20 21:31:53.784190 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:31:53.811785 containerd[1474]: time="2025-03-20T21:31:53.811750703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3154e163-9603-4ab4-b32a-2c057f2eb93d,Namespace:default,Attempt:0,} returns sandbox id \"926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0\"" Mar 20 21:31:53.812942 containerd[1474]: time="2025-03-20T21:31:53.812920488Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 20 21:31:54.204297 containerd[1474]: time="2025-03-20T21:31:54.204242561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:31:54.204969 containerd[1474]: time="2025-03-20T21:31:54.204887535Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 20 21:31:54.206972 containerd[1474]: time="2025-03-20T21:31:54.206937698Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 393.992123ms" Mar 20 21:31:54.206972 containerd[1474]: time="2025-03-20T21:31:54.206966362Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 20 21:31:54.208780 containerd[1474]: time="2025-03-20T21:31:54.208753419Z" level=info msg="CreateContainer within sandbox \"926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 20 21:31:54.216734 containerd[1474]: time="2025-03-20T21:31:54.216683831Z" level=info msg="Container 6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:31:54.223546 containerd[1474]: time="2025-03-20T21:31:54.223514330Z" level=info msg="CreateContainer within sandbox \"926de17e0aa66c61461c7846d801610e7dda06b519fb109e3f8b19b2515e7fa0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab\"" Mar 20 21:31:54.224028 containerd[1474]: time="2025-03-20T21:31:54.223986710Z" level=info msg="StartContainer for \"6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab\"" Mar 20 21:31:54.224879 containerd[1474]: time="2025-03-20T21:31:54.224854044Z" level=info msg="connecting to shim 6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab" address="unix:///run/containerd/s/309188a427cb4b264dc93b2a5be10ad8cc72a1cfb212487bcd0bc908ac4a465e" protocol=ttrpc version=3 Mar 20 21:31:54.247713 systemd[1]: Started cri-containerd-6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab.scope - libcontainer container 6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab. 
Mar 20 21:31:54.274830 containerd[1474]: time="2025-03-20T21:31:54.274796248Z" level=info msg="StartContainer for \"6674874b966b746b6173151222fbb32922ee5302a477bb4384b023979b0083ab\" returns successfully" Mar 20 21:31:54.487125 kubelet[1803]: I0320 21:31:54.487001 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.092096053 podStartE2EDuration="16.486985978s" podCreationTimestamp="2025-03-20 21:31:38 +0000 UTC" firstStartedPulling="2025-03-20 21:31:53.812715211 +0000 UTC m=+50.687935746" lastFinishedPulling="2025-03-20 21:31:54.207605146 +0000 UTC m=+51.082825671" observedRunningTime="2025-03-20 21:31:54.486927758 +0000 UTC m=+51.362148293" watchObservedRunningTime="2025-03-20 21:31:54.486985978 +0000 UTC m=+51.362206513" Mar 20 21:31:54.542607 kubelet[1803]: E0320 21:31:54.542544 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:55.543685 kubelet[1803]: E0320 21:31:55.543629 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:55.555745 systemd-networkd[1415]: lxc577346984c31: Gained IPv6LL Mar 20 21:31:56.544371 kubelet[1803]: E0320 21:31:56.544319 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:57.544498 kubelet[1803]: E0320 21:31:57.544444 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:58.545640 kubelet[1803]: E0320 21:31:58.545563 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:31:59.546454 kubelet[1803]: E0320 21:31:59.546402 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:00.547485 kubelet[1803]: E0320 21:32:00.547430 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:01.125684 containerd[1474]: time="2025-03-20T21:32:01.125634471Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:32:01.131487 containerd[1474]: time="2025-03-20T21:32:01.131435992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" id:\"dd5538ce1009365ed0ff0cf0e49282472eb4aed3f13c93ea23c8e629c85c7702\" pid:3273 exited_at:{seconds:1742506321 nanos:131119005}" Mar 20 21:32:01.133249 containerd[1474]: time="2025-03-20T21:32:01.133213094Z" level=info msg="StopContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" with timeout 2 (s)" Mar 20 21:32:01.133498 containerd[1474]: time="2025-03-20T21:32:01.133467072Z" level=info msg="Stop container \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" with signal terminated" Mar 20 21:32:01.139877 systemd-networkd[1415]: lxc_health: Link DOWN Mar 20 21:32:01.139885 systemd-networkd[1415]: lxc_health: Lost carrier Mar 20 21:32:01.159061 systemd[1]: cri-containerd-213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d.scope: Deactivated successfully. 
Mar 20 21:32:01.159755 systemd[1]: cri-containerd-213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d.scope: Consumed 7.340s CPU time, 121.6M memory peak, 144K read from disk, 13.3M written to disk. Mar 20 21:32:01.160130 containerd[1474]: time="2025-03-20T21:32:01.159764201Z" level=info msg="received exit event container_id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" pid:2328 exited_at:{seconds:1742506321 nanos:159485035}" Mar 20 21:32:01.160130 containerd[1474]: time="2025-03-20T21:32:01.159911057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" id:\"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" pid:2328 exited_at:{seconds:1742506321 nanos:159485035}" Mar 20 21:32:01.179143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d-rootfs.mount: Deactivated successfully. Mar 20 21:32:01.193970 containerd[1474]: time="2025-03-20T21:32:01.193925760Z" level=info msg="StopContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" returns successfully" Mar 20 21:32:01.194738 containerd[1474]: time="2025-03-20T21:32:01.194690559Z" level=info msg="StopPodSandbox for \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\"" Mar 20 21:32:01.194815 containerd[1474]: time="2025-03-20T21:32:01.194764378Z" level=info msg="Container to stop \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:32:01.194815 containerd[1474]: time="2025-03-20T21:32:01.194777813Z" level=info msg="Container to stop \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:32:01.194815 containerd[1474]: time="2025-03-20T21:32:01.194786619Z" level=info msg="Container to stop \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:32:01.194815 containerd[1474]: time="2025-03-20T21:32:01.194795467Z" level=info msg="Container to stop \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:32:01.194815 containerd[1474]: time="2025-03-20T21:32:01.194804043Z" level=info msg="Container to stop \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:32:01.201330 systemd[1]: cri-containerd-f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08.scope: Deactivated successfully. Mar 20 21:32:01.204038 containerd[1474]: time="2025-03-20T21:32:01.203992615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" id:\"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" pid:1918 exit_status:137 exited_at:{seconds:1742506321 nanos:203752592}" Mar 20 21:32:01.226894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08-rootfs.mount: Deactivated successfully. 
Mar 20 21:32:01.230167 containerd[1474]: time="2025-03-20T21:32:01.230109655Z" level=info msg="shim disconnected" id=f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08 namespace=k8s.io Mar 20 21:32:01.230167 containerd[1474]: time="2025-03-20T21:32:01.230145041Z" level=warning msg="cleaning up after shim disconnected" id=f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08 namespace=k8s.io Mar 20 21:32:01.230167 containerd[1474]: time="2025-03-20T21:32:01.230155190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:32:01.243638 containerd[1474]: time="2025-03-20T21:32:01.243591711Z" level=info msg="received exit event sandbox_id:\"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" exit_status:137 exited_at:{seconds:1742506321 nanos:203752592}" Mar 20 21:32:01.243940 containerd[1474]: time="2025-03-20T21:32:01.243800905Z" level=info msg="TearDown network for sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" successfully" Mar 20 21:32:01.243940 containerd[1474]: time="2025-03-20T21:32:01.243829037Z" level=info msg="StopPodSandbox for \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" returns successfully" Mar 20 21:32:01.245647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08-shm.mount: Deactivated successfully. Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434176 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-lib-modules\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434220 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hostproc\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434240 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-kernel\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434267 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-clustermesh-secrets\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434281 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-run\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434338 kubelet[1803]: I0320 21:32:01.434296 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cni-path\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434651 kubelet[1803]: I0320 
21:32:01.434311 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-net\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434651 kubelet[1803]: I0320 21:32:01.434327 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h775\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-kube-api-access-8h775\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434651 kubelet[1803]: I0320 21:32:01.434343 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-xtables-lock\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434651 kubelet[1803]: I0320 21:32:01.434356 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-etc-cni-netd\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.434651 kubelet[1803]: I0320 21:32:01.434345 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434354 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434374 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-config-path\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434445 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-cgroup\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434470 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hubble-tls\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434486 1803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-bpf-maps\") pod \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\" (UID: \"0cd73910-5b7f-4ea9-871e-85e6f55f7cb3\") " Mar 20 21:32:01.435768 kubelet[1803]: I0320 21:32:01.434533 1803 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-run\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.435921 kubelet[1803]: I0320 21:32:01.434542 1803 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-lib-modules\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.435921 kubelet[1803]: I0320 21:32:01.434597 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.435921 kubelet[1803]: I0320 21:32:01.434614 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.435921 kubelet[1803]: I0320 21:32:01.435527 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.435921 kubelet[1803]: I0320 21:32:01.435568 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hostproc" (OuterVolumeSpecName: "hostproc") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.436042 kubelet[1803]: I0320 21:32:01.435643 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cni-path" (OuterVolumeSpecName: "cni-path") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.436042 kubelet[1803]: I0320 21:32:01.435676 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.436042 kubelet[1803]: I0320 21:32:01.435696 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.436042 kubelet[1803]: I0320 21:32:01.435714 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:32:01.437427 kubelet[1803]: I0320 21:32:01.437383 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:32:01.438415 kubelet[1803]: I0320 21:32:01.438394 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 21:32:01.439062 systemd[1]: var-lib-kubelet-pods-0cd73910\x2d5b7f\x2d4ea9\x2d871e\x2d85e6f55f7cb3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 20 21:32:01.439540 kubelet[1803]: I0320 21:32:01.439514 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-kube-api-access-8h775" (OuterVolumeSpecName: "kube-api-access-8h775") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "kube-api-access-8h775". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:32:01.440137 kubelet[1803]: I0320 21:32:01.439876 1803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" (UID: "0cd73910-5b7f-4ea9-871e-85e6f55f7cb3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:32:01.441743 systemd[1]: var-lib-kubelet-pods-0cd73910\x2d5b7f\x2d4ea9\x2d871e\x2d85e6f55f7cb3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8h775.mount: Deactivated successfully. Mar 20 21:32:01.441845 systemd[1]: var-lib-kubelet-pods-0cd73910\x2d5b7f\x2d4ea9\x2d871e\x2d85e6f55f7cb3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 21:32:01.495978 kubelet[1803]: I0320 21:32:01.495954 1803 scope.go:117] "RemoveContainer" containerID="213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d" Mar 20 21:32:01.497621 containerd[1474]: time="2025-03-20T21:32:01.497553626Z" level=info msg="RemoveContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\"" Mar 20 21:32:01.501305 systemd[1]: Removed slice kubepods-burstable-pod0cd73910_5b7f_4ea9_871e_85e6f55f7cb3.slice - libcontainer container kubepods-burstable-pod0cd73910_5b7f_4ea9_871e_85e6f55f7cb3.slice. Mar 20 21:32:01.501516 systemd[1]: kubepods-burstable-pod0cd73910_5b7f_4ea9_871e_85e6f55f7cb3.slice: Consumed 7.475s CPU time, 122.1M memory peak, 144K read from disk, 13.3M written to disk. 
Mar 20 21:32:01.502158 containerd[1474]: time="2025-03-20T21:32:01.502085519Z" level=info msg="RemoveContainer for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" returns successfully" Mar 20 21:32:01.502258 kubelet[1803]: I0320 21:32:01.502238 1803 scope.go:117] "RemoveContainer" containerID="bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79" Mar 20 21:32:01.503371 containerd[1474]: time="2025-03-20T21:32:01.503338816Z" level=info msg="RemoveContainer for \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\"" Mar 20 21:32:01.507629 containerd[1474]: time="2025-03-20T21:32:01.507602314Z" level=info msg="RemoveContainer for \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" returns successfully" Mar 20 21:32:01.507821 kubelet[1803]: I0320 21:32:01.507739 1803 scope.go:117] "RemoveContainer" containerID="ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346" Mar 20 21:32:01.509827 containerd[1474]: time="2025-03-20T21:32:01.509802533Z" level=info msg="RemoveContainer for \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\"" Mar 20 21:32:01.513775 containerd[1474]: time="2025-03-20T21:32:01.513748474Z" level=info msg="RemoveContainer for \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" returns successfully" Mar 20 21:32:01.513888 kubelet[1803]: I0320 21:32:01.513867 1803 scope.go:117] "RemoveContainer" containerID="0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc" Mar 20 21:32:01.515111 containerd[1474]: time="2025-03-20T21:32:01.515083855Z" level=info msg="RemoveContainer for \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\"" Mar 20 21:32:01.518757 containerd[1474]: time="2025-03-20T21:32:01.518724292Z" level=info msg="RemoveContainer for \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" returns successfully" Mar 20 21:32:01.518958 kubelet[1803]: I0320 21:32:01.518882 1803 scope.go:117] "RemoveContainer" containerID="7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d" Mar 20 21:32:01.520074 containerd[1474]: time="2025-03-20T21:32:01.520044986Z" level=info msg="RemoveContainer for \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\"" Mar 20 21:32:01.523196 containerd[1474]: time="2025-03-20T21:32:01.523170735Z" level=info msg="RemoveContainer for \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" returns successfully" Mar 20 21:32:01.523336 kubelet[1803]: I0320 21:32:01.523316 1803 scope.go:117] "RemoveContainer" containerID="213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d" Mar 20 21:32:01.523510 containerd[1474]: time="2025-03-20T21:32:01.523451553Z" level=error msg="ContainerStatus for \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\": not found" Mar 20 21:32:01.523680 kubelet[1803]: E0320 21:32:01.523651 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\": not found" containerID="213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d" Mar 20 21:32:01.523748 kubelet[1803]: I0320 21:32:01.523681 1803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d"} err="failed to get container status \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"213790fe45852935ecc275e217e95865e6a5b54c3cd0a72370f4aad3515e1c3d\": not found" Mar 20 21:32:01.523814 kubelet[1803]: I0320 21:32:01.523748 1803 scope.go:117] "RemoveContainer" containerID="bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79" Mar 20 21:32:01.523912 containerd[1474]: time="2025-03-20T21:32:01.523877845Z" level=error msg="ContainerStatus for \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\": not found" Mar 20 21:32:01.524003 kubelet[1803]: E0320 21:32:01.523984 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\": not found" containerID="bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79" Mar 20 21:32:01.524040 kubelet[1803]: I0320 21:32:01.524001 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79"} err="failed to get container status \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\": rpc error: code = NotFound desc = an error occurred when try to find container \"bced6c278c4b67879c12170e8c3037ab0d0582bd3aa2fb81b6798e406ade4a79\": not found" Mar 20 21:32:01.524040 kubelet[1803]: I0320 21:32:01.524013 1803 scope.go:117] "RemoveContainer" containerID="ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346" Mar 20 21:32:01.524192 containerd[1474]: time="2025-03-20T21:32:01.524152041Z" level=error msg="ContainerStatus for \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\": not found" Mar 20 21:32:01.524291 kubelet[1803]: E0320 21:32:01.524268 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\": not found" containerID="ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346" Mar 20 21:32:01.524291 kubelet[1803]: I0320 21:32:01.524287 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346"} err="failed to get container status \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec94aace5a7ff2d1006ac190fececd17e6b67ed21695316af26f6be7a1d20346\": not found" Mar 20 21:32:01.524352 kubelet[1803]: I0320 21:32:01.524299 1803 scope.go:117] "RemoveContainer" containerID="0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc" Mar 20 21:32:01.524449 containerd[1474]: time="2025-03-20T21:32:01.524421457Z" level=error msg="ContainerStatus for \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\": not found" Mar 20 21:32:01.524563 kubelet[1803]: E0320 21:32:01.524533 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\": not found" containerID="0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc" Mar 20 21:32:01.524616 kubelet[1803]: I0320 21:32:01.524552 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc"} err="failed to get container status \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"0082939cda47df55f2ae5336f163f139e870e58845a6fe234c55cc5507bea5dc\": not found" Mar 20 21:32:01.524616 kubelet[1803]: I0320 21:32:01.524576 1803 scope.go:117] "RemoveContainer" containerID="7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d" Mar 20 21:32:01.524769 containerd[1474]: time="2025-03-20T21:32:01.524724076Z" level=error msg="ContainerStatus for \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\": not found" Mar 20 21:32:01.524853 kubelet[1803]: E0320 21:32:01.524830 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\": not found" containerID="7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d" Mar 20 21:32:01.524892 kubelet[1803]: I0320 21:32:01.524859 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d"} err="failed to get container status \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7175f2d3870fa27789c607b8055dfc53257563abf3bfcaf0175c693dc8347d4d\": not found" Mar 20 21:32:01.535081 kubelet[1803]: I0320 21:32:01.535050 1803 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8h775\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-kube-api-access-8h775\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535081 kubelet[1803]: I0320 21:32:01.535068 1803 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cni-path\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535081 kubelet[1803]: I0320 21:32:01.535077 1803 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-net\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535085 1803 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-config-path\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 
kubelet[1803]: I0320 21:32:01.535094 1803 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-cilium-cgroup\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535102 1803 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hubble-tls\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535110 1803 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-bpf-maps\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535119 1803 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-xtables-lock\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535126 1803 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-etc-cni-netd\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535134 1803 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-hostproc\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535168 kubelet[1803]: I0320 21:32:01.535145 1803 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-host-proc-sys-kernel\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.535342 kubelet[1803]: I0320 21:32:01.535153 1803 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3-clustermesh-secrets\") on node \"10.0.0.139\" DevicePath \"\"" Mar 20 21:32:01.548323 kubelet[1803]: E0320 21:32:01.548291 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:02.368895 kubelet[1803]: I0320 21:32:02.368849 1803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" path="/var/lib/kubelet/pods/0cd73910-5b7f-4ea9-871e-85e6f55f7cb3/volumes" Mar 20 21:32:02.548675 kubelet[1803]: E0320 21:32:02.548635 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:03.492458 kubelet[1803]: E0320 21:32:03.492397 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:03.512778 containerd[1474]: time="2025-03-20T21:32:03.512736556Z" level=info msg="StopPodSandbox for \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\"" Mar 20 21:32:03.513192 containerd[1474]: time="2025-03-20T21:32:03.512881639Z" level=info msg="TearDown network for sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" successfully" Mar 20 21:32:03.513192 containerd[1474]: time="2025-03-20T21:32:03.512893331Z" level=info msg="StopPodSandbox for \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" returns successfully" Mar 20 21:32:03.513247 containerd[1474]: 
time="2025-03-20T21:32:03.513189388Z" level=info msg="RemovePodSandbox for \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\"" Mar 20 21:32:03.513247 containerd[1474]: time="2025-03-20T21:32:03.513218082Z" level=info msg="Forcibly stopping sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\"" Mar 20 21:32:03.513330 containerd[1474]: time="2025-03-20T21:32:03.513314262Z" level=info msg="TearDown network for sandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" successfully" Mar 20 21:32:03.514436 containerd[1474]: time="2025-03-20T21:32:03.514411315Z" level=info msg="Ensure that sandbox f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08 in task-service has been cleanup successfully" Mar 20 21:32:03.517612 containerd[1474]: time="2025-03-20T21:32:03.517567128Z" level=info msg="RemovePodSandbox \"f0beb1afe94e17114aa9fe3414b958d6abcbb29aff9aea4a9600d57fb4508e08\" returns successfully" Mar 20 21:32:03.549410 kubelet[1803]: E0320 21:32:03.549344 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:03.588406 kubelet[1803]: E0320 21:32:03.588363 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="mount-cgroup" Mar 20 21:32:03.588406 kubelet[1803]: E0320 21:32:03.588386 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="apply-sysctl-overwrites" Mar 20 21:32:03.588406 kubelet[1803]: E0320 21:32:03.588393 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="clean-cilium-state" Mar 20 21:32:03.588406 kubelet[1803]: E0320 21:32:03.588399 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="mount-bpf-fs" Mar 20 21:32:03.588406 kubelet[1803]: E0320 21:32:03.588406 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="cilium-agent" Mar 20 21:32:03.588553 kubelet[1803]: I0320 21:32:03.588424 1803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd73910-5b7f-4ea9-871e-85e6f55f7cb3" containerName="cilium-agent" Mar 20 21:32:03.594401 systemd[1]: Created slice kubepods-besteffort-pod68e9db83_fbc3_455f_8cc6_d128607001f4.slice - libcontainer container kubepods-besteffort-pod68e9db83_fbc3_455f_8cc6_d128607001f4.slice. Mar 20 21:32:03.608526 systemd[1]: Created slice kubepods-burstable-pod70f3df32_367d_4e50_a892_65cde6c1a18b.slice - libcontainer container kubepods-burstable-pod70f3df32_367d_4e50_a892_65cde6c1a18b.slice. 
Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746792 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-cilium-cgroup\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746844 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-etc-cni-netd\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746863 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-xtables-lock\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746877 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70f3df32-367d-4e50-a892-65cde6c1a18b-cilium-ipsec-secrets\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746893 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-cni-path\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.746973 kubelet[1803]: I0320 21:32:03.746908 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-lib-modules\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.746969 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70f3df32-367d-4e50-a892-65cde6c1a18b-clustermesh-secrets\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.747075 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-host-proc-sys-net\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.747120 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-host-proc-sys-kernel\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.747154 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-cilium-run\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.747168 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-bpf-maps\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747248 kubelet[1803]: I0320 21:32:03.747187 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70f3df32-367d-4e50-a892-65cde6c1a18b-hostproc\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747717 kubelet[1803]: I0320 21:32:03.747205 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f3df32-367d-4e50-a892-65cde6c1a18b-cilium-config-path\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747717 kubelet[1803]: I0320 21:32:03.747219 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70f3df32-367d-4e50-a892-65cde6c1a18b-hubble-tls\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747717 kubelet[1803]: I0320 21:32:03.747236 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvc7\" (UniqueName: \"kubernetes.io/projected/68e9db83-fbc3-455f-8cc6-d128607001f4-kube-api-access-rcvc7\") pod \"cilium-operator-5d85765b45-4gfwp\" (UID: \"68e9db83-fbc3-455f-8cc6-d128607001f4\") " pod="kube-system/cilium-operator-5d85765b45-4gfwp" Mar 20 21:32:03.747717 kubelet[1803]: I0320 21:32:03.747264 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zl5\" (UniqueName: \"kubernetes.io/projected/70f3df32-367d-4e50-a892-65cde6c1a18b-kube-api-access-t2zl5\") pod \"cilium-rl4pq\" (UID: \"70f3df32-367d-4e50-a892-65cde6c1a18b\") " pod="kube-system/cilium-rl4pq" Mar 20 21:32:03.747717 kubelet[1803]: I0320 21:32:03.747278 1803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68e9db83-fbc3-455f-8cc6-d128607001f4-cilium-config-path\") pod \"cilium-operator-5d85765b45-4gfwp\" (UID: \"68e9db83-fbc3-455f-8cc6-d128607001f4\") " pod="kube-system/cilium-operator-5d85765b45-4gfwp" Mar 20 21:32:03.896740 containerd[1474]: time="2025-03-20T21:32:03.896698142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4gfwp,Uid:68e9db83-fbc3-455f-8cc6-d128607001f4,Namespace:kube-system,Attempt:0,}" Mar 20 21:32:03.912412 containerd[1474]: time="2025-03-20T21:32:03.912357172Z" level=info msg="connecting to shim dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03" address="unix:///run/containerd/s/3133ecbf30735225f9b14c8d168bf8d9d324249e616051ec072f4ff768593b49" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:32:03.920662 containerd[1474]: 
time="2025-03-20T21:32:03.920569051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rl4pq,Uid:70f3df32-367d-4e50-a892-65cde6c1a18b,Namespace:kube-system,Attempt:0,}" Mar 20 21:32:03.935767 containerd[1474]: time="2025-03-20T21:32:03.935713013Z" level=info msg="connecting to shim 466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:32:03.937773 systemd[1]: Started cri-containerd-dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03.scope - libcontainer container dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03. Mar 20 21:32:03.960709 systemd[1]: Started cri-containerd-466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f.scope - libcontainer container 466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f. Mar 20 21:32:03.978258 containerd[1474]: time="2025-03-20T21:32:03.978217364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4gfwp,Uid:68e9db83-fbc3-455f-8cc6-d128607001f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03\"" Mar 20 21:32:03.979879 containerd[1474]: time="2025-03-20T21:32:03.979856676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 21:32:03.986569 containerd[1474]: time="2025-03-20T21:32:03.986523300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rl4pq,Uid:70f3df32-367d-4e50-a892-65cde6c1a18b,Namespace:kube-system,Attempt:0,} returns sandbox id \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\"" Mar 20 21:32:03.988600 containerd[1474]: time="2025-03-20T21:32:03.988548206Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:32:03.994920 containerd[1474]: time="2025-03-20T21:32:03.994889348Z" level=info msg="Container 02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:04.002880 containerd[1474]: time="2025-03-20T21:32:04.002783319Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\"" Mar 20 21:32:04.003275 containerd[1474]: time="2025-03-20T21:32:04.003249185Z" level=info msg="StartContainer for \"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\"" Mar 20 21:32:04.004135 containerd[1474]: time="2025-03-20T21:32:04.004109301Z" level=info msg="connecting to shim 02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" protocol=ttrpc version=3 Mar 20 21:32:04.023742 systemd[1]: Started cri-containerd-02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1.scope - libcontainer container 02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1. 
Mar 20 21:32:04.051423 containerd[1474]: time="2025-03-20T21:32:04.051378086Z" level=info msg="StartContainer for \"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\" returns successfully" Mar 20 21:32:04.059979 systemd[1]: cri-containerd-02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1.scope: Deactivated successfully. Mar 20 21:32:04.061961 containerd[1474]: time="2025-03-20T21:32:04.061918329Z" level=info msg="received exit event container_id:\"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\" id:\"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\" pid:3457 exited_at:{seconds:1742506324 nanos:61669341}" Mar 20 21:32:04.062090 containerd[1474]: time="2025-03-20T21:32:04.062061167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\" id:\"02e5c6693bfdd72996b9beba18d8650ca5dd761b2fa9a8259cce94b0b690caa1\" pid:3457 exited_at:{seconds:1742506324 nanos:61669341}" Mar 20 21:32:04.466701 kubelet[1803]: E0320 21:32:04.466659 1803 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:32:04.504022 containerd[1474]: time="2025-03-20T21:32:04.503982992Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:32:04.510320 containerd[1474]: time="2025-03-20T21:32:04.510288595Z" level=info msg="Container 8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:04.516891 containerd[1474]: time="2025-03-20T21:32:04.516855659Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\"" Mar 20 21:32:04.517285 containerd[1474]: time="2025-03-20T21:32:04.517250932Z" level=info msg="StartContainer for \"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\"" Mar 20 21:32:04.518204 containerd[1474]: time="2025-03-20T21:32:04.518172255Z" level=info msg="connecting to shim 8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" protocol=ttrpc version=3 Mar 20 21:32:04.540722 systemd[1]: Started cri-containerd-8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa.scope - libcontainer container 8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa. Mar 20 21:32:04.550349 kubelet[1803]: E0320 21:32:04.550316 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:04.570448 containerd[1474]: time="2025-03-20T21:32:04.570397575Z" level=info msg="StartContainer for \"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\" returns successfully" Mar 20 21:32:04.576462 systemd[1]: cri-containerd-8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa.scope: Deactivated successfully. 
Mar 20 21:32:04.577053 containerd[1474]: time="2025-03-20T21:32:04.576917290Z" level=info msg="received exit event container_id:\"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\" id:\"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\" pid:3503 exited_at:{seconds:1742506324 nanos:576684152}" Mar 20 21:32:04.577053 containerd[1474]: time="2025-03-20T21:32:04.577004755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\" id:\"8043d62a68068421c010e13b596841890f3899d6a44897d404feb4aa91642dfa\" pid:3503 exited_at:{seconds:1742506324 nanos:576684152}" Mar 20 21:32:05.509347 containerd[1474]: time="2025-03-20T21:32:05.509306423Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:32:05.518108 containerd[1474]: time="2025-03-20T21:32:05.518070124Z" level=info msg="Container fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:05.527140 containerd[1474]: time="2025-03-20T21:32:05.527112209Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\"" Mar 20 21:32:05.527517 containerd[1474]: time="2025-03-20T21:32:05.527496330Z" level=info msg="StartContainer for \"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\"" Mar 20 21:32:05.528772 containerd[1474]: time="2025-03-20T21:32:05.528746941Z" level=info msg="connecting to shim fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" protocol=ttrpc version=3 Mar 20 21:32:05.550815 systemd[1]: Started cri-containerd-fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6.scope - libcontainer container fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6. Mar 20 21:32:05.551539 kubelet[1803]: E0320 21:32:05.551502 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:05.591187 containerd[1474]: time="2025-03-20T21:32:05.591142601Z" level=info msg="StartContainer for \"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\" returns successfully" Mar 20 21:32:05.591647 systemd[1]: cri-containerd-fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6.scope: Deactivated successfully. 
Mar 20 21:32:05.593137 containerd[1474]: time="2025-03-20T21:32:05.592781391Z" level=info msg="received exit event container_id:\"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\" id:\"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\" pid:3547 exited_at:{seconds:1742506325 nanos:592492709}" Mar 20 21:32:05.593275 containerd[1474]: time="2025-03-20T21:32:05.593238260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\" id:\"fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6\" pid:3547 exited_at:{seconds:1742506325 nanos:592492709}" Mar 20 21:32:05.614188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe793ef99a28af48d2a5d3c37f2faf775a67656282cc15b9a57d1dd3612d99f6-rootfs.mount: Deactivated successfully. Mar 20 21:32:05.955146 kubelet[1803]: I0320 21:32:05.955073 1803 setters.go:600] "Node became not ready" node="10.0.0.139" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T21:32:05Z","lastTransitionTime":"2025-03-20T21:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 20 21:32:06.321953 containerd[1474]: time="2025-03-20T21:32:06.321830790Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:06.322632 containerd[1474]: time="2025-03-20T21:32:06.322552315Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 20 21:32:06.323821 containerd[1474]: time="2025-03-20T21:32:06.323775614Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:06.324772 containerd[1474]: time="2025-03-20T21:32:06.324738063Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.34485117s" Mar 20 21:32:06.324772 containerd[1474]: time="2025-03-20T21:32:06.324771395Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 20 21:32:06.326733 containerd[1474]: time="2025-03-20T21:32:06.326700460Z" level=info msg="CreateContainer within sandbox \"dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 20 21:32:06.333678 containerd[1474]: time="2025-03-20T21:32:06.333641413Z" level=info msg="Container 8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:06.341396 containerd[1474]: time="2025-03-20T21:32:06.341360800Z" level=info msg="CreateContainer within sandbox 
\"dcaa47c15c3700f0c975c17904a3193570973b2ec51beb99ff34e6af87eaae03\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0\"" Mar 20 21:32:06.341724 containerd[1474]: time="2025-03-20T21:32:06.341702913Z" level=info msg="StartContainer for \"8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0\"" Mar 20 21:32:06.344641 containerd[1474]: time="2025-03-20T21:32:06.342502035Z" level=info msg="connecting to shim 8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0" address="unix:///run/containerd/s/3133ecbf30735225f9b14c8d168bf8d9d324249e616051ec072f4ff768593b49" protocol=ttrpc version=3 Mar 20 21:32:06.369811 systemd[1]: Started cri-containerd-8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0.scope - libcontainer container 8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0. Mar 20 21:32:06.552042 kubelet[1803]: E0320 21:32:06.551978 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 20 21:32:06.625213 containerd[1474]: time="2025-03-20T21:32:06.625169377Z" level=info msg="StartContainer for \"8853f618e67615519ae5afdd32687e0a1a2e33de2dcf3b8b5e342b36b3bae5b0\" returns successfully" Mar 20 21:32:06.629720 containerd[1474]: time="2025-03-20T21:32:06.629693829Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:32:06.641210 containerd[1474]: time="2025-03-20T21:32:06.641152502Z" level=info msg="Container 9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:06.648685 containerd[1474]: time="2025-03-20T21:32:06.648644691Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\"" Mar 20 21:32:06.649162 containerd[1474]: time="2025-03-20T21:32:06.649122759Z" level=info msg="StartContainer for \"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\"" Mar 20 21:32:06.650172 containerd[1474]: time="2025-03-20T21:32:06.650143118Z" level=info msg="connecting to shim 9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" protocol=ttrpc version=3 Mar 20 21:32:06.650951 kubelet[1803]: I0320 21:32:06.650878 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4gfwp" podStartSLOduration=1.304683554 podStartE2EDuration="3.650863321s" podCreationTimestamp="2025-03-20 21:32:03 +0000 UTC" firstStartedPulling="2025-03-20 21:32:03.979362718 +0000 UTC m=+60.854583253" lastFinishedPulling="2025-03-20 21:32:06.325542485 +0000 UTC m=+63.200763020" observedRunningTime="2025-03-20 21:32:06.650673133 +0000 UTC m=+63.525893688" watchObservedRunningTime="2025-03-20 21:32:06.650863321 +0000 UTC m=+63.526083856" Mar 20 21:32:06.669767 systemd[1]: Started cri-containerd-9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8.scope - libcontainer container 9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8. 
Mar 20 21:32:06.696298 systemd[1]: cri-containerd-9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8.scope: Deactivated successfully.
Mar 20 21:32:06.696615 containerd[1474]: time="2025-03-20T21:32:06.696505250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\" id:\"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\" pid:3634 exited_at:{seconds:1742506326 nanos:696156104}"
Mar 20 21:32:06.699650 containerd[1474]: time="2025-03-20T21:32:06.699612018Z" level=info msg="received exit event container_id:\"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\" id:\"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\" pid:3634 exited_at:{seconds:1742506326 nanos:696156104}"
Mar 20 21:32:06.707962 containerd[1474]: time="2025-03-20T21:32:06.707923628Z" level=info msg="StartContainer for \"9f396f5b764afa05a4b836367bb590988386206e12916fc6f224ae54f87b0eb8\" returns successfully"
Mar 20 21:32:07.552777 kubelet[1803]: E0320 21:32:07.552702 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:07.636260 containerd[1474]: time="2025-03-20T21:32:07.636201482Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 21:32:07.647855 containerd[1474]: time="2025-03-20T21:32:07.647820123Z" level=info msg="Container 64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:32:07.652041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498287314.mount: Deactivated successfully.
Mar 20 21:32:07.656931 containerd[1474]: time="2025-03-20T21:32:07.656889924Z" level=info msg="CreateContainer within sandbox \"466fb1f9a8cb865648651b49f4f0abc840b3450f48f8ad2a77be53a21bb4e41f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\""
Mar 20 21:32:07.657449 containerd[1474]: time="2025-03-20T21:32:07.657412536Z" level=info msg="StartContainer for \"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\""
Mar 20 21:32:07.658347 containerd[1474]: time="2025-03-20T21:32:07.658320532Z" level=info msg="connecting to shim 64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c" address="unix:///run/containerd/s/bd78e6913d27f462965af7017e7769e76edf8ee2ce4b0aa60e57a3af7c5fc006" protocol=ttrpc version=3
Mar 20 21:32:07.677738 systemd[1]: Started cri-containerd-64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c.scope - libcontainer container 64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c.
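[Editor's note] The exited_at field in the TaskExit events above is a protobuf-style timestamp split into seconds and nanos. A small sketch (values copied from the clean-cilium-state exit event) converts it back to the wall-clock time shown in the journal prefix.

    # Convert the exited_at {seconds, nanos} pair from the TaskExit event above
    # into a UTC wall-clock time; values copied from the clean-cilium-state exit.
    from datetime import datetime, timezone

    seconds, nanos = 1742506326, 696156104
    exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(exited_at.isoformat())   # 2025-03-20T21:32:06.696156+00:00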
Mar 20 21:32:07.720099 containerd[1474]: time="2025-03-20T21:32:07.720046851Z" level=info msg="StartContainer for \"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" returns successfully"
Mar 20 21:32:07.787892 containerd[1474]: time="2025-03-20T21:32:07.787836462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" id:\"c36dc5107e37e36c8c78a9b0ff93f240417826059a429e89630e0dfa80942eb9\" pid:3704 exited_at:{seconds:1742506327 nanos:786629825}"
Mar 20 21:32:08.153609 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 20 21:32:08.553336 kubelet[1803]: E0320 21:32:08.553116 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:09.553722 kubelet[1803]: E0320 21:32:09.553650 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:10.047835 containerd[1474]: time="2025-03-20T21:32:10.047763929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" id:\"4fdb02f677b737819e7f2946792e3f8d50652beffea6961060af5f496937217e\" pid:3897 exit_status:1 exited_at:{seconds:1742506330 nanos:47350192}"
Mar 20 21:32:10.554794 kubelet[1803]: E0320 21:32:10.554741 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:11.278645 systemd-networkd[1415]: lxc_health: Link UP
Mar 20 21:32:11.282738 systemd-networkd[1415]: lxc_health: Gained carrier
Mar 20 21:32:11.555205 kubelet[1803]: E0320 21:32:11.555038 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:11.938816 kubelet[1803]: I0320 21:32:11.938752 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rl4pq" podStartSLOduration=8.93873782 podStartE2EDuration="8.93873782s" podCreationTimestamp="2025-03-20 21:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:32:08.65410523 +0000 UTC m=+65.529325765" watchObservedRunningTime="2025-03-20 21:32:11.93873782 +0000 UTC m=+68.813958345"
Mar 20 21:32:12.240006 containerd[1474]: time="2025-03-20T21:32:12.239645357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" id:\"a927e43c77fda4d3305047bde1434c8d15e47cdea73fce3d7cf3b0669be59490\" pid:4275 exited_at:{seconds:1742506332 nanos:239260284}"
Mar 20 21:32:12.555844 kubelet[1803]: E0320 21:32:12.555697 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:12.582664 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Mar 20 21:32:13.556041 kubelet[1803]: E0320 21:32:13.555962 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:14.333987 containerd[1474]: time="2025-03-20T21:32:14.333921155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" id:\"05aafd644868e37c346e2ea2cd1d228414beab26b691f8e75f1c4e94b7ca0c64\" pid:4306 exited_at:{seconds:1742506334 nanos:333462645}"
Mar 20 21:32:14.556146 kubelet[1803]: E0320 21:32:14.556088 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:15.557259 kubelet[1803]: E0320 21:32:15.557175 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:16.425604 containerd[1474]: time="2025-03-20T21:32:16.425536266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64bcd27433c31413efa40cd765b7f4ec644790eff6410eb62f2c2bc59b51946c\" id:\"aeccd86b79315e37c357984c04f5e9c30cabd30c915aa0b21943ff4fd73e8f93\" pid:4339 exited_at:{seconds:1742506336 nanos:425081402}"
Mar 20 21:32:16.558332 kubelet[1803]: E0320 21:32:16.558269 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 20 21:32:17.559360 kubelet[1803]: E0320 21:32:17.559290 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
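[Editor's note] By the end of this stretch the cilium-agent container is answering its periodic exec probes (the repeated TaskExit events for container 64bcd274...) and lxc_health has carrier and an IPv6 link-local address, so the node's Ready condition should flip back from the KubeletNotReady state logged at 21:32:05. Below is a minimal sketch of how one might confirm that from outside the node, assuming a reachable kubeconfig and the official `kubernetes` Python client; the node name 10.0.0.139 is taken from the earlier setters.go entry.

    # Minimal sketch: confirm the node's Ready condition after the CNI comes up.
    # Assumes a reachable kubeconfig and the official `kubernetes` Python client;
    # the node name "10.0.0.139" comes from the "Node became not ready" entry above.
    import time
    from kubernetes import client, config

    config.load_kube_config()        # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    def ready(node_name: str) -> bool:
        node = v1.read_node(node_name)
        return any(c.type == "Ready" and c.status == "True"
                   for c in (node.status.conditions or []))

    while not ready("10.0.0.139"):
        time.sleep(5)                # poll until the CNI reports the network ready
    print("node 10.0.0.139 reports Ready")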