Mar 19 13:01:33.927436 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025
Mar 19 13:01:33.927465 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc
Mar 19 13:01:33.927479 kernel: BIOS-provided physical RAM map:
Mar 19 13:01:33.927488 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 19 13:01:33.927496 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 19 13:01:33.927505 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 19 13:01:33.927516 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Mar 19 13:01:33.927525 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Mar 19 13:01:33.927536 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 19 13:01:33.927544 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 19 13:01:33.927553 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 19 13:01:33.927562 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 19 13:01:33.927570 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 19 13:01:33.927579 kernel: NX (Execute Disable) protection: active
Mar 19 13:01:33.927592 kernel: APIC: Static calls initialized
Mar 19 13:01:33.927602 kernel: SMBIOS 3.0.0 present.
Mar 19 13:01:33.927611 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 19 13:01:33.927621 kernel: Hypervisor detected: KVM
Mar 19 13:01:33.927630 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 19 13:01:33.927640 kernel: kvm-clock: using sched offset of 3579232960 cycles
Mar 19 13:01:33.927650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 19 13:01:33.927660 kernel: tsc: Detected 2495.312 MHz processor
Mar 19 13:01:33.927670 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 19 13:01:33.927681 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 19 13:01:33.927692 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Mar 19 13:01:33.927702 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 19 13:01:33.927712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 19 13:01:33.927722 kernel: Using GB pages for direct mapping
Mar 19 13:01:33.927731 kernel: ACPI: Early table checksum verification disabled
Mar 19 13:01:33.927741 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Mar 19 13:01:33.927751 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927761 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927772 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927782 kernel: ACPI: FACS 0x000000007CFE0000 000040
Mar 19 13:01:33.927792 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927801 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927811 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927821 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 13:01:33.927831 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Mar 19 13:01:33.927841 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Mar 19 13:01:33.927856 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Mar 19 13:01:33.927866 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Mar 19 13:01:33.927891 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Mar 19 13:01:33.927901 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Mar 19 13:01:33.927912 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Mar 19 13:01:33.927922 kernel: No NUMA configuration found
Mar 19 13:01:33.927934 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Mar 19 13:01:33.927992 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Mar 19 13:01:33.928011 kernel: Zone ranges:
Mar 19 13:01:33.928021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 19 13:01:33.928031 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Mar 19 13:01:33.928040 kernel: Normal empty
Mar 19 13:01:33.928051 kernel: Movable zone start for each node
Mar 19 13:01:33.928061 kernel: Early memory node ranges
Mar 19 13:01:33.928071 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 19 13:01:33.928081 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Mar 19 13:01:33.928095 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Mar 19 13:01:33.928105 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 19 13:01:33.928114 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 19 13:01:33.928124 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 19 13:01:33.928134 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 19 13:01:33.928144 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 19 13:01:33.928154 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 19 13:01:33.928164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 19 13:01:33.928174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 19 13:01:33.928186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 19 13:01:33.928196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 19 13:01:33.928206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 19 13:01:33.928216 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 19 13:01:33.928227 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 19 13:01:33.928237 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 19 13:01:33.928247 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 19 13:01:33.928257 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 19 13:01:33.928268 kernel: Booting paravirtualized kernel on KVM
Mar 19 13:01:33.928280 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 19 13:01:33.928291 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 19 13:01:33.928301 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 19 13:01:33.928311 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 19 13:01:33.928322 kernel: pcpu-alloc: [0] 0 1
Mar 19 13:01:33.928332 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 19 13:01:33.928343 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc
Mar 19 13:01:33.928354 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 13:01:33.928366 kernel: random: crng init done
Mar 19 13:01:33.928377 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 13:01:33.928387 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 19 13:01:33.928397 kernel: Fallback order for Node 0: 0
Mar 19 13:01:33.928407 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Mar 19 13:01:33.928417 kernel: Policy zone: DMA32
Mar 19 13:01:33.928428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 13:01:33.928438 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 127200K reserved, 0K cma-reserved)
Mar 19 13:01:33.928448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 19 13:01:33.928460 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 19 13:01:33.928470 kernel: ftrace: allocated 149 pages with 4 groups
Mar 19 13:01:33.928480 kernel: Dynamic Preempt: voluntary
Mar 19 13:01:33.928490 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 13:01:33.928501 kernel: rcu: RCU event tracing is enabled.
Mar 19 13:01:33.928511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 19 13:01:33.928522 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 13:01:33.928532 kernel: Rude variant of Tasks RCU enabled.
Mar 19 13:01:33.928542 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 13:01:33.928554 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 13:01:33.928565 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 19 13:01:33.928575 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 19 13:01:33.928586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 13:01:33.928596 kernel: Console: colour VGA+ 80x25
Mar 19 13:01:33.928606 kernel: printk: console [tty0] enabled
Mar 19 13:01:33.928616 kernel: printk: console [ttyS0] enabled
Mar 19 13:01:33.928627 kernel: ACPI: Core revision 20230628
Mar 19 13:01:33.928637 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 19 13:01:33.928650 kernel: APIC: Switch to symmetric I/O mode setup
Mar 19 13:01:33.928660 kernel: x2apic enabled
Mar 19 13:01:33.928671 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 19 13:01:33.928681 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 19 13:01:33.928691 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 19 13:01:33.928702 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Mar 19 13:01:33.928712 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 19 13:01:33.928723 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 19 13:01:33.928734 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 19 13:01:33.928753 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 19 13:01:33.928763 kernel: Spectre V2 : Mitigation: Retpolines
Mar 19 13:01:33.928775 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 19 13:01:33.928788 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 19 13:01:33.928799 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 19 13:01:33.928809 kernel: RETBleed: Mitigation: untrained return thunk
Mar 19 13:01:33.928819 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 19 13:01:33.928829 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 19 13:01:33.928840 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 19 13:01:33.928853 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 19 13:01:33.928864 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 19 13:01:33.928890 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 19 13:01:33.928901 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 19 13:01:33.928913 kernel: Freeing SMP alternatives memory: 32K
Mar 19 13:01:33.928923 kernel: pid_max: default: 32768 minimum: 301
Mar 19 13:01:33.928934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 13:01:33.928960 kernel: landlock: Up and running.
Mar 19 13:01:33.928971 kernel: SELinux: Initializing.
Mar 19 13:01:33.928982 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 19 13:01:33.928992 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 19 13:01:33.929003 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 19 13:01:33.929014 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 13:01:33.929024 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 13:01:33.929036 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 13:01:33.929046 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 19 13:01:33.929060 kernel: ... version:                0
Mar 19 13:01:33.929070 kernel: ... bit width:              48
Mar 19 13:01:33.929080 kernel: ... generic registers:      6
Mar 19 13:01:33.929091 kernel: ... value mask:             0000ffffffffffff
Mar 19 13:01:33.929102 kernel: ... max period:             00007fffffffffff
Mar 19 13:01:33.929113 kernel: ... fixed-purpose events:   0
Mar 19 13:01:33.929124 kernel: ... event mask:             000000000000003f
Mar 19 13:01:33.929134 kernel: signal: max sigframe size: 1776
Mar 19 13:01:33.929146 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 13:01:33.929159 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 13:01:33.929169 kernel: smp: Bringing up secondary CPUs ...
Mar 19 13:01:33.929180 kernel: smpboot: x86: Booting SMP configuration:
Mar 19 13:01:33.929191 kernel: .... node #0, CPUs: #1
Mar 19 13:01:33.929202 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 13:01:33.929212 kernel: smpboot: Max logical packages: 1
Mar 19 13:01:33.929223 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Mar 19 13:01:33.929234 kernel: devtmpfs: initialized
Mar 19 13:01:33.929244 kernel: x86/mm: Memory block size: 128MB
Mar 19 13:01:33.929255 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 13:01:33.929268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 19 13:01:33.929279 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 13:01:33.929289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 13:01:33.929300 kernel: audit: initializing netlink subsys (disabled)
Mar 19 13:01:33.929311 kernel: audit: type=2000 audit(1742389292.940:1): state=initialized audit_enabled=0 res=1
Mar 19 13:01:33.929322 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 13:01:33.929332 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 19 13:01:33.929343 kernel: cpuidle: using governor menu
Mar 19 13:01:33.929354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 13:01:33.929367 kernel: dca service started, version 1.12.1
Mar 19 13:01:33.929378 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 19 13:01:33.929389 kernel: PCI: Using configuration type 1 for base access
Mar 19 13:01:33.929400 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 19 13:01:33.929411 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 13:01:33.929422 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 13:01:33.929432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 13:01:33.929443 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 13:01:33.929454 kernel: ACPI: Added _OSI(Module Device)
Mar 19 13:01:33.929467 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 13:01:33.929478 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 13:01:33.929489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 13:01:33.929500 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 13:01:33.929511 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 19 13:01:33.929521 kernel: ACPI: Interpreter enabled
Mar 19 13:01:33.929532 kernel: ACPI: PM: (supports S0 S5)
Mar 19 13:01:33.929543 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 19 13:01:33.929554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 19 13:01:33.929567 kernel: PCI: Using E820 reservations for host bridge windows
Mar 19 13:01:33.929578 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 19 13:01:33.929589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 19 13:01:33.929772 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 13:01:33.929893 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 19 13:01:33.931362 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 19 13:01:33.931382 kernel: PCI host bridge to bus 0000:00
Mar 19 13:01:33.931498 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 19 13:01:33.931597 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 19 13:01:33.931699 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 19 13:01:33.931798 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Mar 19 13:01:33.931919 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 19 13:01:33.932058 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 19 13:01:33.932159 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 19 13:01:33.932294 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 19 13:01:33.932415 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 19 13:01:33.932515 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Mar 19 13:01:33.932613 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Mar 19 13:01:33.932709 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Mar 19 13:01:33.932807 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Mar 19 13:01:33.932928 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 19 13:01:33.934487 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.934592 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Mar 19 13:01:33.934697 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.934796 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Mar 19 13:01:33.934914 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.935052 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Mar 19 13:01:33.935161 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.935266 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Mar 19 13:01:33.935371 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.935470 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Mar 19 13:01:33.935578 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.935682 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Mar 19 13:01:33.935787 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.935904 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Mar 19 13:01:33.938053 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.938160 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Mar 19 13:01:33.938272 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 19 13:01:33.938369 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Mar 19 13:01:33.938476 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 19 13:01:33.938572 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 19 13:01:33.938772 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 19 13:01:33.938891 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Mar 19 13:01:33.939043 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Mar 19 13:01:33.939151 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 19 13:01:33.939257 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 19 13:01:33.939367 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 19 13:01:33.939473 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Mar 19 13:01:33.939578 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 19 13:01:33.939683 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Mar 19 13:01:33.939785 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 19 13:01:33.939901 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 19 13:01:33.940974 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 19 13:01:33.941109 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 19 13:01:33.941229 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Mar 19 13:01:33.941338 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 19 13:01:33.941444 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 19 13:01:33.941553 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 13:01:33.941684 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 19 13:01:33.941799 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Mar 19 13:01:33.941931 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Mar 19 13:01:33.942095 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 19 13:01:33.942203 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 19 13:01:33.942314 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 13:01:33.942434 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 19 13:01:33.942553 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 19 13:01:33.942663 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 19 13:01:33.942769 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 19 13:01:33.942888 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 13:01:33.943045 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 19 13:01:33.943161 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Mar 19 13:01:33.943279 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Mar 19 13:01:33.943390 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 19 13:01:33.943506 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 19 13:01:33.943614 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 13:01:33.943735 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 19 13:01:33.943851 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Mar 19 13:01:33.945049 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Mar 19 13:01:33.945164 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 19 13:01:33.945270 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 19 13:01:33.945379 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 13:01:33.945394 kernel: acpiphp: Slot [0] registered
Mar 19 13:01:33.945509 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 19 13:01:33.945620 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Mar 19 13:01:33.945730 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Mar 19 13:01:33.945838 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Mar 19 13:01:33.946997 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 19 13:01:33.948132 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 19 13:01:33.948250 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 13:01:33.948265 kernel: acpiphp: Slot [0-2] registered
Mar 19 13:01:33.948370 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 19 13:01:33.948479 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 19 13:01:33.948589 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 13:01:33.948605 kernel: acpiphp: Slot [0-3] registered
Mar 19 13:01:33.948711 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 19 13:01:33.948812 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 19 13:01:33.948933 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 13:01:33.948963 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 19 13:01:33.948974 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 19 13:01:33.948985 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 19 13:01:33.948995 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 19 13:01:33.949006 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 19 13:01:33.949016 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 19 13:01:33.949026 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 19 13:01:33.949037 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 19 13:01:33.949051 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 19 13:01:33.949061 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 19 13:01:33.949071 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 19 13:01:33.949082 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 19 13:01:33.949093 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 19 13:01:33.949104 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 19 13:01:33.949114 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 19 13:01:33.949124 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 19 13:01:33.949135 kernel: iommu: Default domain type: Translated
Mar 19 13:01:33.949148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 19 13:01:33.949159 kernel: PCI: Using ACPI for IRQ routing
Mar 19 13:01:33.949169 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 19 13:01:33.949180 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 19 13:01:33.949190 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Mar 19 13:01:33.949306 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 19 13:01:33.949412 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 19 13:01:33.949518 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 19 13:01:33.949533 kernel: vgaarb: loaded
Mar 19 13:01:33.949547 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 19 13:01:33.949558 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 19 13:01:33.949569 kernel: clocksource: Switched to clocksource kvm-clock
Mar 19 13:01:33.949580 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 13:01:33.949591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 13:01:33.949601 kernel: pnp: PnP ACPI init
Mar 19 13:01:33.949712 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 19 13:01:33.949729 kernel: pnp: PnP ACPI: found 5 devices
Mar 19 13:01:33.949744 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 19 13:01:33.949754 kernel: NET: Registered PF_INET protocol family
Mar 19 13:01:33.949765 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 13:01:33.949776 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 19 13:01:33.949786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 13:01:33.949797 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 19 13:01:33.949807 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 19 13:01:33.949818 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 19 13:01:33.949829 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 19 13:01:33.949841 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 19 13:01:33.949852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 13:01:33.949863 kernel: NET: Registered PF_XDP protocol family
Mar 19 13:01:33.952023 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 19 13:01:33.952137 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 19 13:01:33.952244 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 19 13:01:33.952349 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 19 13:01:33.952458 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 19 13:01:33.952561 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 19 13:01:33.952666 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 19 13:01:33.952770 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 19 13:01:33.952890 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 19 13:01:33.954036 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 19 13:01:33.954144 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 19 13:01:33.954245 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 13:01:33.954354 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 19 13:01:33.954460 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 19 13:01:33.954567 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 13:01:33.954677 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 19 13:01:33.954782 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 19 13:01:33.954901 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 13:01:33.956055 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 19 13:01:33.956171 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 19 13:01:33.956291 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 13:01:33.956396 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 19 13:01:33.956499 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 19 13:01:33.956601 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 13:01:33.956703 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 19 13:01:33.956806 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 19 13:01:33.956924 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 19 13:01:33.957047 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 13:01:33.957150 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 19 13:01:33.957259 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 19 13:01:33.957363 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 19 13:01:33.957467 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 13:01:33.957570 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 19 13:01:33.957679 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 19 13:01:33.957782 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 19 13:01:33.957905 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 13:01:33.960043 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 19 13:01:33.960143 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 19 13:01:33.960240 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 19 13:01:33.960335 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Mar 19 13:01:33.960426 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 19 13:01:33.960516 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 19 13:01:33.960620 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 19 13:01:33.960717 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 19 13:01:33.960829 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 19 13:01:33.961959 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 13:01:33.962060 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 19 13:01:33.962130 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 13:01:33.962204 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 19 13:01:33.962271 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 13:01:33.962343 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 19 13:01:33.962410 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 13:01:33.962486 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 19 13:01:33.962554 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 13:01:33.962626 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Mar 19 13:01:33.962693 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 19 13:01:33.962761 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 13:01:33.962833 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Mar 19 13:01:33.962914 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Mar 19 13:01:33.964015 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 13:01:33.964093 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Mar 19 13:01:33.964161 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 19 13:01:33.964228 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 13:01:33.964239 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 19 13:01:33.964248 kernel: PCI: CLS 0 bytes, default 64
Mar 19 13:01:33.964256 kernel: Initialise system trusted keyrings
Mar 19 13:01:33.964267 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 19 13:01:33.964275 kernel: Key type asymmetric registered
Mar 19 13:01:33.964283 kernel: Asymmetric key parser 'x509' registered
Mar 19 13:01:33.964291 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 19 13:01:33.964299 kernel: io scheduler mq-deadline registered
Mar 19 13:01:33.964307 kernel: io scheduler kyber registered
Mar 19 13:01:33.964314 kernel: io scheduler bfq registered
Mar 19 13:01:33.964389 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Mar 19 13:01:33.964463 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Mar 19 13:01:33.964539 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Mar 19 13:01:33.964611 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Mar 19 13:01:33.964684 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Mar 19 13:01:33.964755 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Mar 19 13:01:33.964828 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Mar 19 13:01:33.964925 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Mar 19 13:01:33.965017 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Mar 19 13:01:33.965118 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Mar 19 13:01:33.965240 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Mar 19 13:01:33.965331 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Mar 19 13:01:33.965406 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Mar 19 13:01:33.965479 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Mar 19 13:01:33.965578 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Mar 19 13:01:33.965657 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Mar 19 13:01:33.965669 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 19 13:01:33.965740 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Mar 19 13:01:33.965834 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Mar 19 13:01:33.965855 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 19 13:01:33.965892 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Mar 19 13:01:33.965904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 13:01:33.965917 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 19 13:01:33.965928 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 19 13:01:33.965938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 19 13:01:33.965938 kernel: serio: i8042 AUX
port at 0x60,0x64 irq 12 Mar 19 13:01:33.971332 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 19 13:01:33.971432 kernel: rtc_cmos 00:03: registered as rtc0 Mar 19 13:01:33.971448 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 19 13:01:33.971543 kernel: rtc_cmos 00:03: setting system clock to 2025-03-19T13:01:33 UTC (1742389293) Mar 19 13:01:33.971642 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 19 13:01:33.971658 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 19 13:01:33.971670 kernel: NET: Registered PF_INET6 protocol family Mar 19 13:01:33.971680 kernel: Segment Routing with IPv6 Mar 19 13:01:33.971690 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 13:01:33.971701 kernel: NET: Registered PF_PACKET protocol family Mar 19 13:01:33.971709 kernel: Key type dns_resolver registered Mar 19 13:01:33.971717 kernel: IPI shorthand broadcast: enabled Mar 19 13:01:33.971725 kernel: sched_clock: Marking stable (1250006480, 164476414)->(1458671166, -44188272) Mar 19 13:01:33.971733 kernel: registered taskstats version 1 Mar 19 13:01:33.971741 kernel: Loading compiled-in X.509 certificates Mar 19 13:01:33.971749 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 13:01:33.971757 kernel: Key type .fscrypt registered Mar 19 13:01:33.971765 kernel: Key type fscrypt-provisioning registered Mar 19 13:01:33.971774 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 19 13:01:33.971782 kernel: ima: Allocated hash algorithm: sha1
Mar 19 13:01:33.971790 kernel: ima: No architecture policies found
Mar 19 13:01:33.971797 kernel: clk: Disabling unused clocks
Mar 19 13:01:33.971805 kernel: Freeing unused kernel image (initmem) memory: 43480K
Mar 19 13:01:33.971813 kernel: Write protecting the kernel read-only data: 38912k
Mar 19 13:01:33.971821 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 19 13:01:33.971829 kernel: Run /init as init process
Mar 19 13:01:33.971837 kernel: with arguments:
Mar 19 13:01:33.971846 kernel: /init
Mar 19 13:01:33.971853 kernel: with environment:
Mar 19 13:01:33.971917 kernel: HOME=/
Mar 19 13:01:33.971928 kernel: TERM=linux
Mar 19 13:01:33.971939 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 13:01:33.971971 systemd[1]: Successfully made /usr/ read-only.
Mar 19 13:01:33.971987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 13:01:33.972000 systemd[1]: Detected virtualization kvm.
Mar 19 13:01:33.972015 systemd[1]: Detected architecture x86-64.
Mar 19 13:01:33.972026 systemd[1]: Running in initrd.
Mar 19 13:01:33.972038 systemd[1]: No hostname configured, using default hostname.
Mar 19 13:01:33.972050 systemd[1]: Hostname set to .
Mar 19 13:01:33.972061 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 13:01:33.972073 systemd[1]: Queued start job for default target initrd.target.
Mar 19 13:01:33.972084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 13:01:33.972096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 13:01:33.972111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 13:01:33.972123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 13:01:33.972135 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 13:01:33.972147 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 13:01:33.972159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 13:01:33.972168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 13:01:33.972176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 13:01:33.972187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 13:01:33.972195 systemd[1]: Reached target paths.target - Path Units.
Mar 19 13:01:33.972203 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 13:01:33.972211 systemd[1]: Reached target swap.target - Swaps.
Mar 19 13:01:33.972220 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 13:01:33.972228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 13:01:33.972236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 13:01:33.972244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 13:01:33.972254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 13:01:33.972262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 13:01:33.972270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 13:01:33.972278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 13:01:33.972286 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 13:01:33.972294 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 13:01:33.972302 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 13:01:33.972311 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 13:01:33.972320 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 13:01:33.972333 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 13:01:33.972345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 13:01:33.972358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:33.972369 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 13:01:33.972411 systemd-journald[188]: Collecting audit messages is disabled.
Mar 19 13:01:33.972444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 13:01:33.972456 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 13:01:33.972468 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 13:01:33.972483 systemd-journald[188]: Journal started
Mar 19 13:01:33.972509 systemd-journald[188]: Runtime Journal (/run/log/journal/a22ba62815fc4bc88f789d390cc7cc96) is 4.8M, max 38.3M, 33.5M free.
Mar 19 13:01:33.944156 systemd-modules-load[189]: Inserted module 'overlay'
Mar 19 13:01:33.991821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 13:01:33.991844 kernel: Bridge firewalling registered
Mar 19 13:01:33.984156 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 19 13:01:33.996960 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 13:01:33.997153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 13:01:33.998457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:33.999083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 13:01:34.005136 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 13:01:34.009117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 13:01:34.012129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 13:01:34.016386 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 13:01:34.020009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 13:01:34.025640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 13:01:34.029823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 13:01:34.033610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 13:01:34.036451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 13:01:34.044102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 13:01:34.045332 dracut-cmdline[221]: dracut-dracut-053
Mar 19 13:01:34.047392 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc
Mar 19 13:01:34.076349 systemd-resolved[228]: Positive Trust Anchors:
Mar 19 13:01:34.077049 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 13:01:34.077082 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 13:01:34.085755 systemd-resolved[228]: Defaulting to hostname 'linux'.
Mar 19 13:01:34.086642 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 13:01:34.087360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 13:01:34.103974 kernel: SCSI subsystem initialized
Mar 19 13:01:34.113980 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 13:01:34.123975 kernel: iscsi: registered transport (tcp)
Mar 19 13:01:34.152708 kernel: iscsi: registered transport (qla4xxx)
Mar 19 13:01:34.152800 kernel: QLogic iSCSI HBA Driver
Mar 19 13:01:34.187870 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 13:01:34.195179 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 13:01:34.226233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 13:01:34.226323 kernel: device-mapper: uevent: version 1.0.3
Mar 19 13:01:34.228114 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 13:01:34.276032 kernel: raid6: avx2x4 gen() 27788 MB/s
Mar 19 13:01:34.294004 kernel: raid6: avx2x2 gen() 23707 MB/s
Mar 19 13:01:34.311326 kernel: raid6: avx2x1 gen() 23542 MB/s
Mar 19 13:01:34.311402 kernel: raid6: using algorithm avx2x4 gen() 27788 MB/s
Mar 19 13:01:34.331026 kernel: raid6: .... xor() 6924 MB/s, rmw enabled
Mar 19 13:01:34.331111 kernel: raid6: using avx2x2 recovery algorithm
Mar 19 13:01:34.352017 kernel: xor: automatically using best checksumming function avx
Mar 19 13:01:34.496033 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 13:01:34.509620 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 13:01:34.515184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 13:01:34.528143 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Mar 19 13:01:34.532473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 13:01:34.541691 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 19 13:01:34.564671 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Mar 19 13:01:34.593429 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 13:01:34.602198 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 13:01:34.653744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 13:01:34.659121 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 19 13:01:34.683491 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 19 13:01:34.687457 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 13:01:34.689404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 13:01:34.690670 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 13:01:34.697162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 19 13:01:34.710866 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 13:01:34.735766 kernel: scsi host0: Virtio SCSI HBA
Mar 19 13:01:34.760986 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 19 13:01:34.761148 kernel: cryptd: max_cpu_qlen set to 1000
Mar 19 13:01:34.799186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 13:01:34.801054 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 13:01:34.802412 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 13:01:34.803664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 13:01:34.805071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:34.808134 kernel: libata version 3.00 loaded.
Mar 19 13:01:34.808234 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:34.815331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:34.826978 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 19 13:01:34.828957 kernel: ACPI: bus type USB registered
Mar 19 13:01:34.828981 kernel: AES CTR mode by8 optimization enabled
Mar 19 13:01:34.834971 kernel: usbcore: registered new interface driver usbfs
Mar 19 13:01:34.842972 kernel: usbcore: registered new interface driver hub
Mar 19 13:01:34.843037 kernel: usbcore: registered new device driver usb
Mar 19 13:01:34.856044 kernel: ahci 0000:00:1f.2: version 3.0
Mar 19 13:01:34.865920 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 19 13:01:34.865936 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 19 13:01:34.866062 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 19 13:01:34.866155 kernel: scsi host1: ahci
Mar 19 13:01:34.866250 kernel: scsi host2: ahci
Mar 19 13:01:34.866348 kernel: scsi host3: ahci
Mar 19 13:01:34.866436 kernel: scsi host4: ahci
Mar 19 13:01:34.866522 kernel: scsi host5: ahci
Mar 19 13:01:34.866608 kernel: scsi host6: ahci
Mar 19 13:01:34.866697 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Mar 19 13:01:34.866707 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Mar 19 13:01:34.866717 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Mar 19 13:01:34.866729 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Mar 19 13:01:34.866738 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Mar 19 13:01:34.866748 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Mar 19 13:01:34.902798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:34.908118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 13:01:34.918391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 13:01:35.180897 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 19 13:01:35.181019 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 19 13:01:35.185996 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 19 13:01:35.186059 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 19 13:01:35.186081 kernel: ata1.00: applying bridge limits
Mar 19 13:01:35.193326 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 19 13:01:35.193412 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 19 13:01:35.193433 kernel: ata1.00: configured for UDMA/100
Mar 19 13:01:35.198089 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 19 13:01:35.199635 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 19 13:01:35.233282 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 19 13:01:35.300905 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 19 13:01:35.301119 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 19 13:01:35.301298 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 19 13:01:35.301468 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 19 13:01:35.301628 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 19 13:01:35.301790 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 19 13:01:35.301985 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 19 13:01:35.302156 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 19 13:01:35.302312 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 19 13:01:35.302476 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 19 13:01:35.302636 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 19 13:01:35.302654 kernel: GPT:17805311 != 80003071
Mar 19 13:01:35.302669 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 19 13:01:35.302685 kernel: GPT:17805311 != 80003071
Mar 19 13:01:35.302699 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 19 13:01:35.302714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 13:01:35.302730 kernel: hub 1-0:1.0: USB hub found
Mar 19 13:01:35.302918 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 19 13:01:35.303318 kernel: hub 1-0:1.0: 4 ports detected
Mar 19 13:01:35.303479 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 19 13:01:35.303654 kernel: hub 2-0:1.0: USB hub found
Mar 19 13:01:35.303815 kernel: hub 2-0:1.0: 4 ports detected
Mar 19 13:01:35.312436 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 19 13:01:35.327446 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 19 13:01:35.327468 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (467)
Mar 19 13:01:35.327491 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Mar 19 13:01:35.345986 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (461)
Mar 19 13:01:35.363042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 19 13:01:35.371752 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 19 13:01:35.373123 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 19 13:01:35.382751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 19 13:01:35.393360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 19 13:01:35.399328 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 19 13:01:35.404578 disk-uuid[581]: Primary Header is updated.
Mar 19 13:01:35.404578 disk-uuid[581]: Secondary Entries is updated.
Mar 19 13:01:35.404578 disk-uuid[581]: Secondary Header is updated.
Mar 19 13:01:35.415129 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 13:01:35.528086 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 19 13:01:35.665981 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 13:01:35.671517 kernel: usbcore: registered new interface driver usbhid
Mar 19 13:01:35.671611 kernel: usbhid: USB HID core driver
Mar 19 13:01:35.679039 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Mar 19 13:01:35.679101 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 19 13:01:36.431049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 13:01:36.431134 disk-uuid[583]: The operation has completed successfully.
Mar 19 13:01:36.496368 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 19 13:01:36.496476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 19 13:01:36.555150 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 19 13:01:36.557835 sh[599]: Success
Mar 19 13:01:36.571043 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 19 13:01:36.631637 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 19 13:01:36.645431 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 19 13:01:36.646895 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 19 13:01:36.664423 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4
Mar 19 13:01:36.664473 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 19 13:01:36.666274 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 19 13:01:36.668163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 19 13:01:36.670483 kernel: BTRFS info (device dm-0): using free space tree
Mar 19 13:01:36.679991 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 19 13:01:36.682324 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 19 13:01:36.684139 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 19 13:01:36.691179 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 19 13:01:36.696967 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 19 13:01:36.714654 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 13:01:36.714721 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 19 13:01:36.714731 kernel: BTRFS info (device sda6): using free space tree
Mar 19 13:01:36.719750 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 19 13:01:36.719803 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 13:01:36.728670 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 19 13:01:36.731527 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 13:01:36.736669 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 19 13:01:36.744219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 19 13:01:36.786894 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 13:01:36.797386 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 13:01:36.830597 systemd-networkd[781]: lo: Link UP
Mar 19 13:01:36.830605 systemd-networkd[781]: lo: Gained carrier
Mar 19 13:01:36.832738 systemd-networkd[781]: Enumeration completed
Mar 19 13:01:36.832848 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 13:01:36.834696 ignition[722]: Ignition 2.20.0
Mar 19 13:01:36.834045 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:36.834708 ignition[722]: Stage: fetch-offline
Mar 19 13:01:36.834049 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 13:01:36.834759 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:36.835145 systemd[1]: Reached target network.target - Network.
Mar 19 13:01:36.834769 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:36.836451 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:36.834851 ignition[722]: parsed url from cmdline: ""
Mar 19 13:01:36.836455 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 13:01:36.834854 ignition[722]: no config URL provided
Mar 19 13:01:36.838403 systemd-networkd[781]: eth0: Link UP
Mar 19 13:01:36.834858 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
Mar 19 13:01:36.838406 systemd-networkd[781]: eth0: Gained carrier
Mar 19 13:01:36.834864 ignition[722]: no config at "/usr/lib/ignition/user.ign"
Mar 19 13:01:36.838414 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:36.834868 ignition[722]: failed to fetch config: resource requires networking
Mar 19 13:01:36.838689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 13:01:36.835893 ignition[722]: Ignition finished successfully
Mar 19 13:01:36.844822 systemd-networkd[781]: eth1: Link UP
Mar 19 13:01:36.844826 systemd-networkd[781]: eth1: Gained carrier
Mar 19 13:01:36.844839 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:36.847103 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 19 13:01:36.858608 ignition[790]: Ignition 2.20.0
Mar 19 13:01:36.858622 ignition[790]: Stage: fetch
Mar 19 13:01:36.858797 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:36.858823 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:36.858911 ignition[790]: parsed url from cmdline: ""
Mar 19 13:01:36.858913 ignition[790]: no config URL provided
Mar 19 13:01:36.858917 ignition[790]: reading system config file "/usr/lib/ignition/user.ign"
Mar 19 13:01:36.858923 ignition[790]: no config at "/usr/lib/ignition/user.ign"
Mar 19 13:01:36.858942 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 19 13:01:36.859091 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 19 13:01:36.885126 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 19 13:01:36.904044 systemd-networkd[781]: eth0: DHCPv4 address 157.180.44.40/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 19 13:01:37.059363 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 19 13:01:37.064680 ignition[790]: GET result: OK
Mar 19 13:01:37.064742 ignition[790]: parsing config with SHA512: be8afd9ef687b2b190ad479fabe9bf9cbf79b5158e93a8536da9a44c0299ae679080f990fdaf10156439a818a8e89c2fc997c3631a8eda2d5bc093dedee28fb8
Mar 19 13:01:37.068010 unknown[790]: fetched base config from "system"
Mar 19 13:01:37.068290 ignition[790]: fetch: fetch complete
Mar 19 13:01:37.068021 unknown[790]: fetched base config from "system"
Mar 19 13:01:37.068295 ignition[790]: fetch: fetch passed
Mar 19 13:01:37.068027 unknown[790]: fetched user config from "hetzner"
Mar 19 13:01:37.068344 ignition[790]: Ignition finished successfully
Mar 19 13:01:37.070766 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 19 13:01:37.078321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 19 13:01:37.106873 ignition[798]: Ignition 2.20.0
Mar 19 13:01:37.106921 ignition[798]: Stage: kargs
Mar 19 13:01:37.107343 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:37.107367 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:37.109064 ignition[798]: kargs: kargs passed
Mar 19 13:01:37.110782 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 19 13:01:37.109148 ignition[798]: Ignition finished successfully
Mar 19 13:01:37.120318 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 19 13:01:37.158460 ignition[804]: Ignition 2.20.0
Mar 19 13:01:37.158481 ignition[804]: Stage: disks
Mar 19 13:01:37.158803 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:37.158821 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:37.160270 ignition[804]: disks: disks passed
Mar 19 13:01:37.161840 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 19 13:01:37.160341 ignition[804]: Ignition finished successfully
Mar 19 13:01:37.164290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 19 13:01:37.165420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 19 13:01:37.167332 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 13:01:37.169086 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 13:01:37.171134 systemd[1]: Reached target basic.target - Basic System.
Mar 19 13:01:37.179228 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 19 13:01:37.205483 systemd-fsck[813]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 19 13:01:37.208355 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 19 13:01:37.672296 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 19 13:01:37.793979 kernel: EXT4-fs (sda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none.
Mar 19 13:01:37.794411 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 19 13:01:37.795318 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 19 13:01:37.801094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 13:01:37.805027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 19 13:01:37.807577 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 19 13:01:37.809443 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 19 13:01:37.809492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 13:01:37.816918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 19 13:01:37.818783 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (821)
Mar 19 13:01:37.818810 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 13:01:37.822507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 19 13:01:37.822588 kernel: BTRFS info (device sda6): using free space tree
Mar 19 13:01:37.831076 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 19 13:01:37.831160 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 13:01:37.836171 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 19 13:01:37.842423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 13:01:37.886714 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Mar 19 13:01:37.889676 coreos-metadata[823]: Mar 19 13:01:37.889 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 19 13:01:37.892085 coreos-metadata[823]: Mar 19 13:01:37.890 INFO Fetch successful
Mar 19 13:01:37.892085 coreos-metadata[823]: Mar 19 13:01:37.891 INFO wrote hostname ci-4230-1-0-5-e04fa09f69 to /sysroot/etc/hostname
Mar 19 13:01:37.893474 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 19 13:01:37.896738 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Mar 19 13:01:37.903112 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory
Mar 19 13:01:37.906824 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 19 13:01:38.006267 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 19 13:01:38.010122 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 19 13:01:38.013144 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 19 13:01:38.023102 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 13:01:38.047126 ignition[937]: INFO : Ignition 2.20.0
Mar 19 13:01:38.047126 ignition[937]: INFO : Stage: mount
Mar 19 13:01:38.047126 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:38.047126 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:38.051140 ignition[937]: INFO : mount: mount passed
Mar 19 13:01:38.051140 ignition[937]: INFO : Ignition finished successfully
Mar 19 13:01:38.048976 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 19 13:01:38.050843 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 19 13:01:38.057222 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 19 13:01:38.160231 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 19 13:01:38.288655 systemd-networkd[781]: eth1: Gained IPv6LL
Mar 19 13:01:38.662722 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 19 13:01:38.668325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 13:01:38.683043 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (950)
Mar 19 13:01:38.686376 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 13:01:38.686468 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 19 13:01:38.688089 kernel: BTRFS info (device sda6): using free space tree
Mar 19 13:01:38.695790 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 19 13:01:38.695866 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 13:01:38.699162 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 13:01:38.723048 ignition[966]: INFO : Ignition 2.20.0
Mar 19 13:01:38.723048 ignition[966]: INFO : Stage: files
Mar 19 13:01:38.724611 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:38.724611 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:38.724611 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Mar 19 13:01:38.727184 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 19 13:01:38.727184 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 19 13:01:38.729359 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 19 13:01:38.730818 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 19 13:01:38.730818 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 19 13:01:38.729818 unknown[966]: wrote ssh authorized keys file for user: core
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 19 13:01:38.733529 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 19 13:01:39.531361 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Mar 19 13:01:40.749114 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 19 13:01:40.749114 ignition[966]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 13:01:40.753394 ignition[966]: INFO : files: files passed
Mar 19 13:01:40.753394 ignition[966]: INFO : Ignition finished successfully
Mar 19 13:01:40.752914 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 19 13:01:40.761218 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 19 13:01:40.780168 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 19 13:01:40.783837 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 19 13:01:40.786534 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 19 13:01:40.805149 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 13:01:40.805149 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 13:01:40.808861 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 13:01:40.810735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 13:01:40.813023 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 19 13:01:40.820298 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 19 13:01:40.860406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 19 13:01:40.860597 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 19 13:01:40.863293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 19 13:01:40.864545 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 19 13:01:40.866484 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 19 13:01:40.876207 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 19 13:01:40.893186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 13:01:40.900154 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 19 13:01:40.913088 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 19 13:01:40.914737 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 13:01:40.916413 systemd[1]: Stopped target timers.target - Timer Units.
Mar 19 13:01:40.917079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 19 13:01:40.917254 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 13:01:40.918991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 19 13:01:40.919817 systemd[1]: Stopped target basic.target - Basic System.
Mar 19 13:01:40.921158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 19 13:01:40.922384 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 13:01:40.923581 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 19 13:01:40.924999 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 19 13:01:40.926415 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 13:01:40.927848 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 19 13:01:40.929189 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 19 13:01:40.930551 systemd[1]: Stopped target swap.target - Swaps.
Mar 19 13:01:40.931790 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 19 13:01:40.931988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 13:01:40.933346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 19 13:01:40.934261 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 13:01:40.935471 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 19 13:01:40.935615 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 13:01:40.936914 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 19 13:01:40.937110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 19 13:01:40.938997 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 19 13:01:40.939157 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 13:01:40.940658 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 19 13:01:40.940854 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 19 13:01:40.942082 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 19 13:01:40.942245 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 19 13:01:40.950543 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 19 13:01:40.954172 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 19 13:01:40.954686 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 19 13:01:40.954909 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 13:01:40.956715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 19 13:01:40.957083 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 13:01:40.968037 ignition[1019]: INFO : Ignition 2.20.0
Mar 19 13:01:40.968037 ignition[1019]: INFO : Stage: umount
Mar 19 13:01:40.969677 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 13:01:40.969677 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 19 13:01:40.968682 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 19 13:01:40.969134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 19 13:01:40.974781 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 19 13:01:40.977049 ignition[1019]: INFO : umount: umount passed
Mar 19 13:01:40.977049 ignition[1019]: INFO : Ignition finished successfully
Mar 19 13:01:40.974872 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 19 13:01:40.976993 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 19 13:01:40.977059 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 19 13:01:40.979377 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 19 13:01:40.979419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 19 13:01:40.980304 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 19 13:01:40.980339 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 19 13:01:40.981434 systemd[1]: Stopped target network.target - Network.
Mar 19 13:01:40.982151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 19 13:01:40.982191 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 13:01:40.983199 systemd[1]: Stopped target paths.target - Path Units.
Mar 19 13:01:40.985240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 19 13:01:40.989014 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 13:01:40.989710 systemd[1]: Stopped target slices.target - Slice Units.
Mar 19 13:01:40.990777 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 19 13:01:40.992247 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 19 13:01:40.992283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 13:01:40.993822 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 19 13:01:40.993858 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 13:01:40.994767 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 19 13:01:40.994815 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 19 13:01:40.995794 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 19 13:01:40.995840 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 19 13:01:40.996853 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 19 13:01:41.001597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 19 13:01:41.004083 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 19 13:01:41.004616 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 19 13:01:41.004710 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 19 13:01:41.006750 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 13:01:41.006838 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 13:01:41.007705 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 19 13:01:41.007780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 19 13:01:41.010909 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 19 13:01:41.011293 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 19 13:01:41.011398 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 19 13:01:41.013215 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 19 13:01:41.013790 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 19 13:01:41.013830 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 13:01:41.021481 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 19 13:01:41.022013 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 19 13:01:41.022070 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 13:01:41.022586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 13:01:41.022617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 13:01:41.023503 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 13:01:41.023537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 13:01:41.024365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 13:01:41.024402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 13:01:41.026078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 13:01:41.029274 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 13:01:41.029336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 13:01:41.037782 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 13:01:41.037916 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 13:01:41.041578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 13:01:41.041702 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 13:01:41.043260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 13:01:41.043317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 13:01:41.044314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 13:01:41.044349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 13:01:41.045554 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 13:01:41.045598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 13:01:41.047449 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 13:01:41.047505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 13:01:41.048857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 13:01:41.048919 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 13:01:41.060510 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 13:01:41.061087 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 13:01:41.061143 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 13:01:41.063185 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 19 13:01:41.063224 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 13:01:41.064364 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 19 13:01:41.064416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 13:01:41.067169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 13:01:41.067247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:41.069811 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 13:01:41.069908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 13:01:41.070360 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 13:01:41.070488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 13:01:41.072252 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 13:01:41.080166 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 13:01:41.087106 systemd[1]: Switching root.
Mar 19 13:01:41.131241 systemd-journald[188]: Journal stopped
Mar 19 13:01:42.316223 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 19 13:01:42.316273 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 13:01:42.316288 kernel: SELinux: policy capability open_perms=1
Mar 19 13:01:42.316297 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 13:01:42.316306 kernel: SELinux: policy capability always_check_network=0
Mar 19 13:01:42.316315 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 13:01:42.316327 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 13:01:42.316338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 13:01:42.316346 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 13:01:42.316356 kernel: audit: type=1403 audit(1742389301.305:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 13:01:42.316366 systemd[1]: Successfully loaded SELinux policy in 59.722ms.
Mar 19 13:01:42.316390 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.544ms.
Mar 19 13:01:42.316401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 13:01:42.316411 systemd[1]: Detected virtualization kvm.
Mar 19 13:01:42.316420 systemd[1]: Detected architecture x86-64.
Mar 19 13:01:42.316429 systemd[1]: Detected first boot.
Mar 19 13:01:42.316441 systemd[1]: Hostname set to .
Mar 19 13:01:42.316450 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 13:01:42.316460 zram_generator::config[1064]: No configuration found.
Mar 19 13:01:42.316471 kernel: Guest personality initialized and is inactive
Mar 19 13:01:42.316481 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 19 13:01:42.316489 kernel: Initialized host personality
Mar 19 13:01:42.316498 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 13:01:42.316507 systemd[1]: Populated /etc with preset unit settings.
Mar 19 13:01:42.316519 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 13:01:42.316529 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 13:01:42.316538 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 13:01:42.316548 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 13:01:42.316558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 13:01:42.316568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 13:01:42.316578 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 13:01:42.316588 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 13:01:42.316598 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 13:01:42.316609 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 13:01:42.316619 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 13:01:42.316629 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 13:01:42.316638 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 13:01:42.316649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 13:01:42.316658 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 13:01:42.316669 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 13:01:42.316681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 13:01:42.316691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 13:01:42.316701 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 19 13:01:42.316711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 13:01:42.316721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 13:01:42.316731 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 13:01:42.316741 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 13:01:42.316752 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 13:01:42.316762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 13:01:42.316771 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 13:01:42.316781 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 13:01:42.316792 systemd[1]: Reached target swap.target - Swaps.
Mar 19 13:01:42.316801 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 13:01:42.316811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 13:01:42.316821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 13:01:42.316830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 13:01:42.316840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 13:01:42.316851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 13:01:42.316860 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 13:01:42.316869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 13:01:42.316890 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 13:01:42.316900 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 13:01:42.316910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:42.316920 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 13:01:42.316929 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 13:01:42.316940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 13:01:42.321072 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 13:01:42.321098 systemd[1]: Reached target machines.target - Containers.
Mar 19 13:01:42.321114 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 13:01:42.321139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 13:01:42.321158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 13:01:42.321176 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 13:01:42.321198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 13:01:42.321214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 13:01:42.321230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 13:01:42.321246 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 13:01:42.321262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 13:01:42.321278 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 13:01:42.321293 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 13:01:42.321311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 13:01:42.321326 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 13:01:42.321339 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 13:01:42.321355 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 13:01:42.321371 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 13:01:42.321386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 13:01:42.321401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 13:01:42.321415 kernel: ACPI: bus type drm_connector registered
Mar 19 13:01:42.321431 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 13:01:42.321450 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 13:01:42.321464 kernel: fuse: init (API version 7.39)
Mar 19 13:01:42.321477 kernel: loop: module loaded
Mar 19 13:01:42.321492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 13:01:42.321507 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 13:01:42.321523 systemd[1]: Stopped verity-setup.service.
Mar 19 13:01:42.321543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:42.321563 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 13:01:42.321613 systemd-journald[1148]: Collecting audit messages is disabled.
Mar 19 13:01:42.321649 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 13:01:42.321667 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 13:01:42.321682 systemd-journald[1148]: Journal started
Mar 19 13:01:42.321712 systemd-journald[1148]: Runtime Journal (/run/log/journal/a22ba62815fc4bc88f789d390cc7cc96) is 4.8M, max 38.3M, 33.5M free.
Mar 19 13:01:41.995850 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 13:01:42.327537 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 13:01:42.010968 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 19 13:01:42.011637 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 13:01:42.326804 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 13:01:42.328515 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 13:01:42.337717 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 13:01:42.340060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 13:01:42.341519 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 13:01:42.343624 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 13:01:42.343871 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 13:01:42.345187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 13:01:42.345489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 13:01:42.346577 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 13:01:42.346875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 13:01:42.347894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 13:01:42.348152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 13:01:42.349193 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 13:01:42.349397 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 13:01:42.350381 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 13:01:42.350554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 13:01:42.351721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 13:01:42.352812 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 13:01:42.353906 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 13:01:42.362634 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 13:01:42.369509 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 13:01:42.377672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 13:01:42.384062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 13:01:42.384703 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 13:01:42.384748 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 13:01:42.388094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 13:01:42.403893 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 13:01:42.407874 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 13:01:42.408616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 13:01:42.411689 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 13:01:42.418430 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 13:01:42.419620 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 13:01:42.426596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 13:01:42.427788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 13:01:42.431765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 13:01:42.435080 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 13:01:42.438181 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 13:01:42.441831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 13:01:42.443820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 13:01:42.444457 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 13:01:42.446297 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 13:01:42.453251 systemd-journald[1148]: Time spent on flushing to /var/log/journal/a22ba62815fc4bc88f789d390cc7cc96 is 87.260ms for 1132 entries.
Mar 19 13:01:42.453251 systemd-journald[1148]: System Journal (/var/log/journal/a22ba62815fc4bc88f789d390cc7cc96) is 8M, max 584.8M, 576.8M free.
Mar 19 13:01:42.572058 systemd-journald[1148]: Received client request to flush runtime journal.
Mar 19 13:01:42.572186 kernel: loop0: detected capacity change from 0 to 218376
Mar 19 13:01:42.575088 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 13:01:42.575138 kernel: loop1: detected capacity change from 0 to 147912
Mar 19 13:01:42.457687 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 13:01:42.459456 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 13:01:42.462304 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 13:01:42.475792 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 13:01:42.505316 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 19 13:01:42.532675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 13:01:42.534484 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 19 13:01:42.534496 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 19 13:01:42.547325 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 13:01:42.556241 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 13:01:42.568082 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 13:01:42.580810 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 13:01:42.611640 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 13:01:42.623349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 13:01:42.628204 kernel: loop2: detected capacity change from 0 to 138176
Mar 19 13:01:42.640603 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Mar 19 13:01:42.640620 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Mar 19 13:01:42.650067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 13:01:42.679066 kernel: loop3: detected capacity change from 0 to 8
Mar 19 13:01:42.700983 kernel: loop4: detected capacity change from 0 to 218376
Mar 19 13:01:42.738034 kernel: loop5: detected capacity change from 0 to 147912
Mar 19 13:01:42.760996 kernel: loop6: detected capacity change from 0 to 138176
Mar 19 13:01:42.795395 kernel: loop7: detected capacity change from 0 to 8
Mar 19 13:01:42.794708 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 19 13:01:42.795120 (sd-merge)[1218]: Merged extensions into '/usr'.
Mar 19 13:01:42.800611 systemd[1]: Reload requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 13:01:42.800626 systemd[1]: Reloading...
Mar 19 13:01:42.890977 zram_generator::config[1245]: No configuration found.
Mar 19 13:01:43.007016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 13:01:43.062461 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 13:01:43.082007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 13:01:43.082237 systemd[1]: Reloading finished in 281 ms.
Mar 19 13:01:43.099653 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 13:01:43.100864 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 13:01:43.111075 systemd[1]: Starting ensure-sysext.service...
Mar 19 13:01:43.114825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 13:01:43.134014 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Mar 19 13:01:43.134169 systemd[1]: Reloading...
Mar 19 13:01:43.141874 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 13:01:43.142403 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 13:01:43.143158 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 13:01:43.143487 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Mar 19 13:01:43.143590 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Mar 19 13:01:43.149768 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 13:01:43.149780 systemd-tmpfiles[1290]: Skipping /boot
Mar 19 13:01:43.173076 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 13:01:43.174019 systemd-tmpfiles[1290]: Skipping /boot
Mar 19 13:01:43.225072 zram_generator::config[1319]: No configuration found.
Mar 19 13:01:43.334289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 13:01:43.404631 systemd[1]: Reloading finished in 270 ms.
Mar 19 13:01:43.416322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 13:01:43.425485 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 13:01:43.436197 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 13:01:43.442130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 13:01:43.444674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 13:01:43.450281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 13:01:43.459342 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 13:01:43.463107 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 13:01:43.469747 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.470398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 13:01:43.480303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 13:01:43.492329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 13:01:43.498277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 13:01:43.499718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 13:01:43.501121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 13:01:43.501257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.503199 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 13:01:43.506229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 13:01:43.506458 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 13:01:43.512243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 13:01:43.512455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 13:01:43.513614 systemd-udevd[1369]: Using default interface naming scheme 'v255'.
Mar 19 13:01:43.526171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 13:01:43.526400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 13:01:43.538671 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 13:01:43.543531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.543822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 13:01:43.546499 augenrules[1397]: No rules
Mar 19 13:01:43.550341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 13:01:43.557474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 13:01:43.566399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 13:01:43.571101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 13:01:43.571778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 13:01:43.572022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 13:01:43.579110 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 13:01:43.582738 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 13:01:43.583318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.585675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 13:01:43.588657 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 13:01:43.588840 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 13:01:43.590053 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 13:01:43.591738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 13:01:43.591867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 13:01:43.593699 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 13:01:43.593859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 13:01:43.595665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 13:01:43.595999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 13:01:43.610014 systemd[1]: Finished ensure-sysext.service.
Mar 19 13:01:43.623195 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 13:01:43.623713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 13:01:43.629179 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 19 13:01:43.629714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 13:01:43.630006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 13:01:43.630462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 13:01:43.632652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 13:01:43.637500 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 13:01:43.667036 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 19 13:01:43.693437 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 13:01:43.795569 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 19 13:01:43.803044 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1407)
Mar 19 13:01:43.815077 kernel: ACPI: button: Power Button [PWRF]
Mar 19 13:01:43.860793 systemd-networkd[1429]: lo: Link UP
Mar 19 13:01:43.860802 systemd-networkd[1429]: lo: Gained carrier
Mar 19 13:01:43.869348 systemd-networkd[1429]: Enumeration completed
Mar 19 13:01:43.870272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 13:01:43.870688 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:43.870692 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 13:01:43.871118 systemd-networkd[1429]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:43.871122 systemd-networkd[1429]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 13:01:43.871402 systemd-networkd[1429]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:43.871741 systemd-networkd[1429]: eth0: Link UP
Mar 19 13:01:43.871793 systemd-networkd[1429]: eth0: Gained carrier
Mar 19 13:01:43.871838 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:43.877452 systemd-networkd[1429]: eth1: Link UP
Mar 19 13:01:43.877557 systemd-networkd[1429]: eth1: Gained carrier
Mar 19 13:01:43.877608 systemd-networkd[1429]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 13:01:43.880632 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 19 13:01:43.880716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.880846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 13:01:43.883351 systemd-resolved[1368]: Positive Trust Anchors:
Mar 19 13:01:43.883374 systemd-resolved[1368]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 13:01:43.883422 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 13:01:43.889469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 13:01:43.893092 systemd-resolved[1368]: Using system hostname 'ci-4230-1-0-5-e04fa09f69'.
Mar 19 13:01:43.896122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 13:01:43.899078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 13:01:43.900027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 13:01:43.900059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 13:01:43.903079 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 13:01:43.907070 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 13:01:43.907575 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 13:01:43.907591 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 13:01:43.907762 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 19 13:01:43.908381 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 13:01:43.910126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 13:01:43.910272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 13:01:43.915084 systemd[1]: Reached target network.target - Network.
Mar 19 13:01:43.915512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 13:01:43.916002 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 13:01:43.917054 systemd-networkd[1429]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 19 13:01:43.918519 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Mar 19 13:01:43.928560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 13:01:43.930017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 13:01:43.930906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 13:01:43.931766 systemd-networkd[1429]: eth0: DHCPv4 address 157.180.44.40/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 19 13:01:43.933046 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Mar 19 13:01:43.933570 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 13:01:43.933699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 13:01:43.934335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 13:01:43.944363 kernel: mousedev: PS/2 mouse device common for all mice
Mar 19 13:01:43.949054 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 19 13:01:43.973526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 19 13:01:43.979980 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 19 13:01:43.981273 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 13:01:43.986965 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 19 13:01:43.995016 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 19 13:01:43.996143 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 19 13:01:44.003397 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 19 13:01:44.005653 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Mar 19 13:01:44.008211 kernel: EDAC MC: Ver: 3.0.0
Mar 19 13:01:44.024050 kernel: Console: switching to colour dummy device 80x25
Mar 19 13:01:44.024484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:44.027538 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 19 13:01:44.027574 kernel: [drm] features: -context_init
Mar 19 13:01:44.029972 kernel: [drm] number of scanouts: 1
Mar 19 13:01:44.031312 kernel: [drm] number of cap sets: 0
Mar 19 13:01:44.033970 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 19 13:01:44.044039 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 19 13:01:44.044113 kernel: Console: switching to colour frame buffer device 160x50
Mar 19 13:01:44.055261 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 19 13:01:44.059261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 13:01:44.061748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 13:01:44.062044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:44.069329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:44.073214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 13:01:44.073484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:44.083194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 13:01:44.140341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 13:01:44.187455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 13:01:44.193283 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 13:01:44.204095 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 13:01:44.228647 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 19 13:01:44.229877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 13:01:44.230055 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 13:01:44.230246 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 19 13:01:44.230364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 19 13:01:44.230653 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 19 13:01:44.230836 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 19 13:01:44.230937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 19 13:01:44.231039 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 19 13:01:44.231070 systemd[1]: Reached target paths.target - Path Units.
Mar 19 13:01:44.231138 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 13:01:44.233408 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 19 13:01:44.235591 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 19 13:01:44.241058 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 19 13:01:44.243224 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 19 13:01:44.243982 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 19 13:01:44.259995 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 19 13:01:44.262175 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 19 13:01:44.278299 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 19 13:01:44.279873 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 19 13:01:44.283197 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 13:01:44.283365 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 13:01:44.285468 systemd[1]: Reached target basic.target - Basic System.
Mar 19 13:01:44.286288 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 19 13:01:44.286338 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 19 13:01:44.292095 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 19 13:01:44.301228 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 19 13:01:44.312587 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 19 13:01:44.315864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 19 13:01:44.326154 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 19 13:01:44.328220 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 19 13:01:44.333085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 19 13:01:44.336287 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 19 13:01:44.340223 jq[1497]: false
Mar 19 13:01:44.346503 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 19 13:01:44.351774 coreos-metadata[1493]: Mar 19 13:01:44.351 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 19 13:01:44.358299 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 19 13:01:44.364759 coreos-metadata[1493]: Mar 19 13:01:44.364 INFO Fetch successful Mar 19 13:01:44.365799 coreos-metadata[1493]: Mar 19 13:01:44.365 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 19 13:01:44.367975 coreos-metadata[1493]: Mar 19 13:01:44.366 INFO Fetch successful Mar 19 13:01:44.371579 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 13:01:44.377182 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 13:01:44.377814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 13:01:44.380788 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 13:01:44.386075 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 13:01:44.388437 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 13:01:44.398310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 13:01:44.398983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 13:01:44.399296 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 13:01:44.399479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 13:01:44.411530 dbus-daemon[1496]: [system] SELinux support is enabled Mar 19 13:01:44.419552 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 19 13:01:44.426646 extend-filesystems[1498]: Found loop4 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found loop5 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found loop6 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found loop7 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda1 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda2 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda3 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found usr Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda4 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda6 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda7 Mar 19 13:01:44.426646 extend-filesystems[1498]: Found sda9 Mar 19 13:01:44.426646 extend-filesystems[1498]: Checking size of /dev/sda9 Mar 19 13:01:44.504283 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 19 13:01:44.504662 update_engine[1508]: I20250319 13:01:44.470588 1508 main.cc:92] Flatcar Update Engine starting Mar 19 13:01:44.504662 update_engine[1508]: I20250319 13:01:44.472261 1508 update_check_scheduler.cc:74] Next update check in 8m2s Mar 19 13:01:44.425931 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 13:01:44.507800 extend-filesystems[1498]: Resized partition /dev/sda9 Mar 19 13:01:44.434631 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 13:01:44.518280 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Mar 19 13:01:44.458253 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 13:01:44.522071 jq[1511]: true Mar 19 13:01:44.458279 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 19 13:01:44.469148 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 13:01:44.469169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 13:01:44.538237 jq[1528]: true Mar 19 13:01:44.472517 systemd[1]: Started update-engine.service - Update Engine. Mar 19 13:01:44.473786 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 13:01:44.501203 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 13:01:44.550704 systemd-logind[1505]: New seat seat0. Mar 19 13:01:44.559053 systemd-logind[1505]: Watching system buttons on /dev/input/event2 (Power Button) Mar 19 13:01:44.559068 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 13:01:44.559686 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 13:01:44.605047 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 13:01:44.606912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 13:01:44.641970 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1412) Mar 19 13:01:44.694451 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Mar 19 13:01:44.698438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 13:01:44.715334 systemd[1]: Starting sshkeys.service... Mar 19 13:01:44.726207 locksmithd[1534]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 13:01:44.747983 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
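The extend-filesystems entries above (`Checking size of /dev/sda9`, the kernel's `EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks`, and `resize2fs 1.47.1`) are an online ext4 grow of the root partition. The same mechanism can be reproduced self-contained on a loopback image instead of a real disk; this is an illustrative sketch, not what the service literally runs, and the file paths here are made up:

```shell
#!/bin/sh
# Demonstrate an ext4 grow with resize2fs, as extend-filesystems.service does
# above for /dev/sda9 -- but against a throwaway loop image, not a real disk.
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"                 # make a small ext4 filesystem in the image
blocks_of() { dumpe2fs -h "$1" 2>/dev/null | awk '/^Block count:/ {print $3}'; }
before=$(blocks_of "$img")
truncate -s 32M "$img"                 # the backing "partition" doubles in size
e2fsck -f -p "$img" >/dev/null         # resize2fs expects a recently checked fs
resize2fs "$img" >/dev/null 2>&1       # grow the filesystem into the new space
after=$(blocks_of "$img")
echo "grew from $before to $after blocks"
rm -f "$img"
```

On a live system the filesystem can stay mounted during the grow, which is why the log shows the resize happening on `/` with no remount.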
Mar 19 13:01:44.768021 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 19 13:01:44.760304 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 19 13:01:44.797071 coreos-metadata[1572]: Mar 19 13:01:44.783 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 19 13:01:44.797071 coreos-metadata[1572]: Mar 19 13:01:44.783 INFO Fetch successful Mar 19 13:01:44.799013 extend-filesystems[1533]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 19 13:01:44.799013 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 19 13:01:44.799013 extend-filesystems[1533]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 19 13:01:44.813646 extend-filesystems[1498]: Resized filesystem in /dev/sda9 Mar 19 13:01:44.813646 extend-filesystems[1498]: Found sr0 Mar 19 13:01:44.801416 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 13:01:44.801608 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 13:01:44.801686 unknown[1572]: wrote ssh authorized keys file for user: core Mar 19 13:01:44.845988 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys" Mar 19 13:01:44.848249 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 19 13:01:44.857054 systemd[1]: Finished sshkeys.service. Mar 19 13:01:44.862844 containerd[1526]: time="2025-03-19T13:01:44.862751446Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 13:01:44.897538 containerd[1526]: time="2025-03-19T13:01:44.897464185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.899509 containerd[1526]: time="2025-03-19T13:01:44.899465978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 13:01:44.899615 containerd[1526]: time="2025-03-19T13:01:44.899597796Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 13:01:44.899688 containerd[1526]: time="2025-03-19T13:01:44.899672556Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 13:01:44.899969 containerd[1526]: time="2025-03-19T13:01:44.899934347Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 13:01:44.900057 containerd[1526]: time="2025-03-19T13:01:44.900043371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.900186 containerd[1526]: time="2025-03-19T13:01:44.900167864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 13:01:44.900244 containerd[1526]: time="2025-03-19T13:01:44.900231894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.900528 containerd[1526]: time="2025-03-19T13:01:44.900507431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.900577542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.900600095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.900612558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.900694371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.900924964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.901099601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.901116122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.901206862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 13:01:44.901287 containerd[1526]: time="2025-03-19T13:01:44.901257216Z" level=info msg="metadata content store policy set" policy=shared Mar 19 13:01:44.906756 containerd[1526]: time="2025-03-19T13:01:44.906702816Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 13:01:44.906998 containerd[1526]: time="2025-03-19T13:01:44.906939680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 19 13:01:44.907150 containerd[1526]: time="2025-03-19T13:01:44.907132983Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 13:01:44.907246 containerd[1526]: time="2025-03-19T13:01:44.907229954Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 13:01:44.907337 containerd[1526]: time="2025-03-19T13:01:44.907321966Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 13:01:44.907715 containerd[1526]: time="2025-03-19T13:01:44.907695698Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 13:01:44.908226 containerd[1526]: time="2025-03-19T13:01:44.908193861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 13:01:44.908429 containerd[1526]: time="2025-03-19T13:01:44.908410407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 13:01:44.908524 containerd[1526]: time="2025-03-19T13:01:44.908507980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 13:01:44.908626 containerd[1526]: time="2025-03-19T13:01:44.908610142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 13:01:44.908772 containerd[1526]: time="2025-03-19T13:01:44.908696263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.908772 containerd[1526]: time="2025-03-19T13:01:44.908716701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 19 13:01:44.908772 containerd[1526]: time="2025-03-19T13:01:44.908735267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.908963 containerd[1526]: time="2025-03-19T13:01:44.908895828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.908963 containerd[1526]: time="2025-03-19T13:01:44.908927076Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909179039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909205709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909221267Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909268776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909289395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909306417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909347053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909367762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909386557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909427 containerd[1526]: time="2025-03-19T13:01:44.909402667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909715925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909741914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909764285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909800553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909818898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909835710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909871486Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 13:01:44.909960 containerd[1526]: time="2025-03-19T13:01:44.909920378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 19 13:01:44.910413 containerd[1526]: time="2025-03-19T13:01:44.910197748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.910413 containerd[1526]: time="2025-03-19T13:01:44.910223306Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 13:01:44.910413 containerd[1526]: time="2025-03-19T13:01:44.910310229Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 13:01:44.910608 containerd[1526]: time="2025-03-19T13:01:44.910338332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 13:01:44.910608 containerd[1526]: time="2025-03-19T13:01:44.910546692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 13:01:44.910608 containerd[1526]: time="2025-03-19T13:01:44.910566139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 13:01:44.910608 containerd[1526]: time="2025-03-19T13:01:44.910578863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 13:01:44.910840 containerd[1526]: time="2025-03-19T13:01:44.910705400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 13:01:44.910840 containerd[1526]: time="2025-03-19T13:01:44.910727551Z" level=info msg="NRI interface is disabled by configuration." Mar 19 13:01:44.910840 containerd[1526]: time="2025-03-19T13:01:44.910746878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 13:01:44.911738 containerd[1526]: time="2025-03-19T13:01:44.911525357Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 13:01:44.911738 containerd[1526]: time="2025-03-19T13:01:44.911593655Z" level=info msg="Connect containerd service" Mar 19 13:01:44.911738 containerd[1526]: time="2025-03-19T13:01:44.911624143Z" level=info msg="using legacy CRI server" Mar 19 13:01:44.911738 containerd[1526]: time="2025-03-19T13:01:44.911631586Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 13:01:44.912418 containerd[1526]: time="2025-03-19T13:01:44.912089825Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 13:01:44.913016 containerd[1526]: time="2025-03-19T13:01:44.912990534Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913176603Z" level=info msg="Start subscribing containerd event" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913227428Z" level=info msg="Start recovering state" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913303380Z" level=info msg="Start event monitor" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913328798Z" level=info msg="Start 
snapshots syncer" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913338827Z" level=info msg="Start cni network conf syncer for default" Mar 19 13:01:44.913583 containerd[1526]: time="2025-03-19T13:01:44.913347804Z" level=info msg="Start streaming server" Mar 19 13:01:44.913923 containerd[1526]: time="2025-03-19T13:01:44.913901011Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 13:01:44.914048 containerd[1526]: time="2025-03-19T13:01:44.914031445Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 13:01:44.914166 containerd[1526]: time="2025-03-19T13:01:44.914151300Z" level=info msg="containerd successfully booted in 0.057500s" Mar 19 13:01:44.914269 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 13:01:45.185348 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 13:01:45.233876 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 13:01:45.248528 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 13:01:45.256860 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 13:01:45.257122 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 13:01:45.264102 systemd-networkd[1429]: eth1: Gained IPv6LL Mar 19 13:01:45.264783 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 13:01:45.267086 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Mar 19 13:01:45.270114 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 13:01:45.275116 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 13:01:45.284147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 13:01:45.286785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
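The long `Start cri plugin with config {...}` dump above is containerd 1.7 echoing its effective CRI configuration. Reconstructed as a `config.toml` sketch (version 2 schema; this is inferred from the dump, not read from this host's file), the settings visible in the log look roughly like:

```toml
# Sketch of /etc/containerd/config.toml settings matching the CRI dump above.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The earlier `failed to load cni during init` error is consistent with this: `conf_dir` (/etc/cni/net.d) is empty until a CNI plugin installs a network config, which normally happens after the node joins a cluster.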
Mar 19 13:01:45.290611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 13:01:45.304260 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 13:01:45.315247 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 13:01:45.315962 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 13:01:45.324658 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 13:01:45.328072 systemd-networkd[1429]: eth0: Gained IPv6LL Mar 19 13:01:45.329129 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Mar 19 13:01:46.343577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 13:01:46.347825 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 13:01:46.348226 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 13:01:46.353837 systemd[1]: Startup finished in 1.385s (kernel) + 7.587s (initrd) + 5.106s (userspace) = 14.078s. Mar 19 13:01:47.113305 kubelet[1619]: E0319 13:01:47.113228 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 13:01:47.116529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 13:01:47.116714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 13:01:47.117245 systemd[1]: kubelet.service: Consumed 1.220s CPU time, 253.2M memory peak. Mar 19 13:01:57.160686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 13:01:57.166222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 19 13:01:57.294242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 13:01:57.305295 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 13:01:57.348724 kubelet[1638]: E0319 13:01:57.348616 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 13:01:57.351176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 13:01:57.351413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 13:01:57.352028 systemd[1]: kubelet.service: Consumed 158ms CPU time, 102.3M memory peak. Mar 19 13:02:07.411379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 13:02:07.418284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 13:02:07.568660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 13:02:07.572148 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 13:02:07.632424 kubelet[1654]: E0319 13:02:07.632301 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 13:02:07.634228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 13:02:07.634426 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
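The `Scheduled restart job, restart counter is at N` entries above recur every 10 seconds because of kubelet.service's restart policy. A unit fragment like the following (a sketch of the usual kubeadm-style drop-in, not read from this host) produces exactly this cadence:

```ini
# kubelet.service drop-in (sketch): restart unconditionally, 10 s apart,
# which matches the 13:01:47 failure -> 13:01:57 restart spacing in the log.
[Service]
Restart=always
RestartSec=10
```

With `Restart=always` and no effective start-rate limit, systemd will keep cycling the unit indefinitely, as the climbing restart counter shows.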
Mar 19 13:02:07.634926 systemd[1]: kubelet.service: Consumed 200ms CPU time, 103.8M memory peak. Mar 19 13:02:16.347923 systemd-timesyncd[1430]: Contacted time server 188.174.253.188:123 (2.flatcar.pool.ntp.org). Mar 19 13:02:16.347996 systemd-timesyncd[1430]: Initial clock synchronization to Wed 2025-03-19 13:02:16.347642 UTC. Mar 19 13:02:16.348831 systemd-resolved[1368]: Clock change detected. Flushing caches. Mar 19 13:02:18.315698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 13:02:18.321212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 13:02:18.454177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 13:02:18.456627 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 13:02:18.504181 kubelet[1670]: E0319 13:02:18.504114 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 13:02:18.507188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 13:02:18.507339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 13:02:18.507692 systemd[1]: kubelet.service: Consumed 166ms CPU time, 102M memory peak. Mar 19 13:02:28.566500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 19 13:02:28.573284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 13:02:28.698198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 13:02:28.710260 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 13:02:28.756991 kubelet[1686]: E0319 13:02:28.756942 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 13:02:28.759807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 13:02:28.760008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 13:02:28.760300 systemd[1]: kubelet.service: Consumed 161ms CPU time, 101.4M memory peak. Mar 19 13:02:29.975488 update_engine[1508]: I20250319 13:02:29.975085 1508 update_attempter.cc:509] Updating boot flags... Mar 19 13:02:30.027977 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1702) Mar 19 13:02:30.086982 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1703) Mar 19 13:02:38.815683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 19 13:02:38.821387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 13:02:38.944697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 13:02:38.948747 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:02:38.999710 kubelet[1719]: E0319 13:02:38.999597 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:02:39.002478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:02:39.002640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:02:39.003107 systemd[1]: kubelet.service: Consumed 164ms CPU time, 103.6M memory peak.
Mar 19 13:02:49.065785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 19 13:02:49.071550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:02:49.256514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:02:49.261413 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:02:49.308240 kubelet[1734]: E0319 13:02:49.308186 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:02:49.311116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:02:49.311256 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:02:49.311530 systemd[1]: kubelet.service: Consumed 203ms CPU time, 103M memory peak.
Mar 19 13:02:59.315613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 19 13:02:59.321213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:02:59.445768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:02:59.450785 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:02:59.500158 kubelet[1750]: E0319 13:02:59.500066 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:02:59.504020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:02:59.504226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:02:59.504652 systemd[1]: kubelet.service: Consumed 173ms CPU time, 103.3M memory peak.
Mar 19 13:03:09.565510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 19 13:03:09.570297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:09.696968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:09.706151 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:03:09.747840 kubelet[1766]: E0319 13:03:09.747736 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:03:09.749389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:03:09.749589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:03:09.749994 systemd[1]: kubelet.service: Consumed 148ms CPU time, 102.8M memory peak.
Mar 19 13:03:19.815775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 19 13:03:19.821372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:19.990186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:19.993769 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:03:20.032526 kubelet[1781]: E0319 13:03:20.032469 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:03:20.035316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:03:20.035611 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:03:20.036134 systemd[1]: kubelet.service: Consumed 179ms CPU time, 105.5M memory peak.
Mar 19 13:03:30.035220 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 19 13:03:30.041349 systemd[1]: Started sshd@0-157.180.44.40:22-139.178.68.195:54752.service - OpenSSH per-connection server daemon (139.178.68.195:54752).
Mar 19 13:03:30.042579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 19 13:03:30.055068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:30.187820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:30.200429 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:03:30.254081 kubelet[1799]: E0319 13:03:30.254003 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:03:30.256986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:03:30.257160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:03:30.257663 systemd[1]: kubelet.service: Consumed 159ms CPU time, 102M memory peak.
Mar 19 13:03:31.039341 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 54752 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:31.041859 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:31.054867 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 19 13:03:31.062295 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 19 13:03:31.067953 systemd-logind[1505]: New session 1 of user core.
Mar 19 13:03:31.077501 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 19 13:03:31.084323 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 19 13:03:31.089468 (systemd)[1809]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 19 13:03:31.091945 systemd-logind[1505]: New session c1 of user core.
Mar 19 13:03:31.255689 systemd[1809]: Queued start job for default target default.target.
Mar 19 13:03:31.262176 systemd[1809]: Created slice app.slice - User Application Slice.
Mar 19 13:03:31.262439 systemd[1809]: Reached target paths.target - Paths.
Mar 19 13:03:31.262503 systemd[1809]: Reached target timers.target - Timers.
Mar 19 13:03:31.264449 systemd[1809]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 19 13:03:31.280232 systemd[1809]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 19 13:03:31.280464 systemd[1809]: Reached target sockets.target - Sockets.
Mar 19 13:03:31.280527 systemd[1809]: Reached target basic.target - Basic System.
Mar 19 13:03:31.280567 systemd[1809]: Reached target default.target - Main User Target.
Mar 19 13:03:31.280616 systemd[1809]: Startup finished in 182ms.
Mar 19 13:03:31.280817 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 19 13:03:31.293210 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 19 13:03:31.996280 systemd[1]: Started sshd@1-157.180.44.40:22-139.178.68.195:54762.service - OpenSSH per-connection server daemon (139.178.68.195:54762).
Mar 19 13:03:32.996998 sshd[1820]: Accepted publickey for core from 139.178.68.195 port 54762 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:32.998702 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:33.004444 systemd-logind[1505]: New session 2 of user core.
Mar 19 13:03:33.012204 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 19 13:03:33.690025 sshd[1822]: Connection closed by 139.178.68.195 port 54762
Mar 19 13:03:33.690917 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:33.695957 systemd[1]: sshd@1-157.180.44.40:22-139.178.68.195:54762.service: Deactivated successfully.
Mar 19 13:03:33.698854 systemd[1]: session-2.scope: Deactivated successfully.
Mar 19 13:03:33.700650 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit.
Mar 19 13:03:33.702387 systemd-logind[1505]: Removed session 2.
Mar 19 13:03:33.865376 systemd[1]: Started sshd@2-157.180.44.40:22-139.178.68.195:54776.service - OpenSSH per-connection server daemon (139.178.68.195:54776).
Mar 19 13:03:34.857164 sshd[1828]: Accepted publickey for core from 139.178.68.195 port 54776 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:34.858843 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:34.865499 systemd-logind[1505]: New session 3 of user core.
Mar 19 13:03:34.872309 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 19 13:03:35.539023 sshd[1830]: Connection closed by 139.178.68.195 port 54776
Mar 19 13:03:35.539836 sshd-session[1828]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:35.543221 systemd[1]: sshd@2-157.180.44.40:22-139.178.68.195:54776.service: Deactivated successfully.
Mar 19 13:03:35.545613 systemd[1]: session-3.scope: Deactivated successfully.
Mar 19 13:03:35.547698 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit.
Mar 19 13:03:35.549237 systemd-logind[1505]: Removed session 3.
Mar 19 13:03:35.717400 systemd[1]: Started sshd@3-157.180.44.40:22-139.178.68.195:54784.service - OpenSSH per-connection server daemon (139.178.68.195:54784).
Mar 19 13:03:36.715725 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 54784 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:36.717785 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:36.725341 systemd-logind[1505]: New session 4 of user core.
Mar 19 13:03:36.741234 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 19 13:03:37.404334 sshd[1838]: Connection closed by 139.178.68.195 port 54784
Mar 19 13:03:37.405146 sshd-session[1836]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:37.409755 systemd[1]: sshd@3-157.180.44.40:22-139.178.68.195:54784.service: Deactivated successfully.
Mar 19 13:03:37.411724 systemd[1]: session-4.scope: Deactivated successfully.
Mar 19 13:03:37.412644 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit.
Mar 19 13:03:37.414365 systemd-logind[1505]: Removed session 4.
Mar 19 13:03:37.579350 systemd[1]: Started sshd@4-157.180.44.40:22-139.178.68.195:37228.service - OpenSSH per-connection server daemon (139.178.68.195:37228).
Mar 19 13:03:38.564142 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 37228 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:38.565864 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:38.572082 systemd-logind[1505]: New session 5 of user core.
Mar 19 13:03:38.578199 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 19 13:03:39.096832 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 19 13:03:39.097359 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 13:03:39.109966 sudo[1847]: pam_unix(sudo:session): session closed for user root
Mar 19 13:03:39.268707 sshd[1846]: Connection closed by 139.178.68.195 port 37228
Mar 19 13:03:39.269747 sshd-session[1844]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:39.275408 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit.
Mar 19 13:03:39.276391 systemd[1]: sshd@4-157.180.44.40:22-139.178.68.195:37228.service: Deactivated successfully.
Mar 19 13:03:39.279040 systemd[1]: session-5.scope: Deactivated successfully.
Mar 19 13:03:39.280391 systemd-logind[1505]: Removed session 5.
Mar 19 13:03:39.442395 systemd[1]: Started sshd@5-157.180.44.40:22-139.178.68.195:37240.service - OpenSSH per-connection server daemon (139.178.68.195:37240).
Mar 19 13:03:40.315434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 19 13:03:40.321158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:40.418075 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 37240 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:40.419761 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:40.428873 systemd-logind[1505]: New session 6 of user core.
Mar 19 13:03:40.437205 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 19 13:03:40.472816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:40.492659 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 13:03:40.553229 kubelet[1864]: E0319 13:03:40.553150 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 13:03:40.556292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 13:03:40.556467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 13:03:40.557063 systemd[1]: kubelet.service: Consumed 190ms CPU time, 101.2M memory peak.
Mar 19 13:03:40.935382 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 19 13:03:40.935715 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 13:03:40.940200 sudo[1872]: pam_unix(sudo:session): session closed for user root
Mar 19 13:03:40.946602 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 19 13:03:40.946983 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 13:03:40.962452 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 13:03:40.995399 augenrules[1894]: No rules
Mar 19 13:03:40.996088 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 13:03:40.996344 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 13:03:40.997560 sudo[1871]: pam_unix(sudo:session): session closed for user root
Mar 19 13:03:41.154660 sshd[1860]: Connection closed by 139.178.68.195 port 37240
Mar 19 13:03:41.155337 sshd-session[1853]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:41.158700 systemd[1]: sshd@5-157.180.44.40:22-139.178.68.195:37240.service: Deactivated successfully.
Mar 19 13:03:41.160849 systemd[1]: session-6.scope: Deactivated successfully.
Mar 19 13:03:41.162981 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit.
Mar 19 13:03:41.164291 systemd-logind[1505]: Removed session 6.
Mar 19 13:03:41.360421 systemd[1]: Started sshd@6-157.180.44.40:22-139.178.68.195:37254.service - OpenSSH per-connection server daemon (139.178.68.195:37254).
Mar 19 13:03:42.429384 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 37254 ssh2: RSA SHA256:L15lBNHhpcYMbNb2J6/f4OVxa0+buv89ULzmbi6dY0s
Mar 19 13:03:42.431324 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 13:03:42.440370 systemd-logind[1505]: New session 7 of user core.
Mar 19 13:03:42.447088 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 19 13:03:42.992311 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 19 13:03:42.992632 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 13:03:43.751437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:43.752578 systemd[1]: kubelet.service: Consumed 190ms CPU time, 101.2M memory peak.
Mar 19 13:03:43.767477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:43.808983 systemd[1]: Reload requested from client PID 1940 ('systemctl') (unit session-7.scope)...
Mar 19 13:03:43.809010 systemd[1]: Reloading...
Mar 19 13:03:43.917908 zram_generator::config[1984]: No configuration found.
Mar 19 13:03:44.024068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 13:03:44.134777 systemd[1]: Reloading finished in 325 ms.
Mar 19 13:03:44.180261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:44.183480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:44.189382 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 13:03:44.189553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:44.189589 systemd[1]: kubelet.service: Consumed 110ms CPU time, 91.7M memory peak.
Mar 19 13:03:44.202205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 13:03:44.325947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 13:03:44.336993 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 13:03:44.384337 kubelet[2040]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 13:03:44.384337 kubelet[2040]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 19 13:03:44.384337 kubelet[2040]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 13:03:44.384831 kubelet[2040]: I0319 13:03:44.384398 2040 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 13:03:45.081586 kubelet[2040]: I0319 13:03:45.081528 2040 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 19 13:03:45.082301 kubelet[2040]: I0319 13:03:45.081740 2040 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 13:03:45.082301 kubelet[2040]: I0319 13:03:45.082025 2040 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 19 13:03:45.113220 kubelet[2040]: I0319 13:03:45.112538 2040 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 13:03:45.124021 kubelet[2040]: E0319 13:03:45.123976 2040 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 13:03:45.124389 kubelet[2040]: I0319 13:03:45.124336 2040 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 13:03:45.128828 kubelet[2040]: I0319 13:03:45.128780 2040 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 13:03:45.131401 kubelet[2040]: I0319 13:03:45.131316 2040 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 13:03:45.131619 kubelet[2040]: I0319 13:03:45.131386 2040 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 13:03:45.131619 kubelet[2040]: I0319 13:03:45.131612 2040 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 13:03:45.131787 kubelet[2040]: I0319 13:03:45.131626 2040 container_manager_linux.go:304] "Creating device plugin manager"
Mar 19 13:03:45.131817 kubelet[2040]: I0319 13:03:45.131794 2040 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 13:03:45.137408 kubelet[2040]: I0319 13:03:45.137346 2040 kubelet.go:446] "Attempting to sync node with API server"
Mar 19 13:03:45.137408 kubelet[2040]: I0319 13:03:45.137386 2040 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 13:03:45.137408 kubelet[2040]: I0319 13:03:45.137410 2040 kubelet.go:352] "Adding apiserver pod source"
Mar 19 13:03:45.137408 kubelet[2040]: I0319 13:03:45.137421 2040 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 13:03:45.142310 kubelet[2040]: E0319 13:03:45.142055 2040 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:45.142310 kubelet[2040]: E0319 13:03:45.142255 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:45.143518 kubelet[2040]: I0319 13:03:45.142827 2040 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 13:03:45.143518 kubelet[2040]: I0319 13:03:45.143365 2040 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 13:03:45.144383 kubelet[2040]: W0319 13:03:45.144184 2040 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 13:03:45.147478 kubelet[2040]: I0319 13:03:45.147427 2040 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 19 13:03:45.147601 kubelet[2040]: I0319 13:03:45.147491 2040 server.go:1287] "Started kubelet"
Mar 19 13:03:45.147812 kubelet[2040]: I0319 13:03:45.147745 2040 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 13:03:45.149100 kubelet[2040]: I0319 13:03:45.149057 2040 server.go:490] "Adding debug handlers to kubelet server"
Mar 19 13:03:45.152248 kubelet[2040]: I0319 13:03:45.151797 2040 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 13:03:45.152248 kubelet[2040]: I0319 13:03:45.152024 2040 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 13:03:45.152423 kubelet[2040]: I0319 13:03:45.152380 2040 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 13:03:45.153369 kubelet[2040]: W0319 13:03:45.153334 2040 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 19 13:03:45.153441 kubelet[2040]: E0319 13:03:45.153386 2040 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 19 13:03:45.153554 kubelet[2040]: W0319 13:03:45.153524 2040 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 19 13:03:45.153554 kubelet[2040]: E0319 13:03:45.153547 2040 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 19 13:03:45.160131 kubelet[2040]: E0319 13:03:45.157527 2040 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.182e35f35a4fa499 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-03-19 13:03:45.147454617 +0000 UTC m=+0.806386202,LastTimestamp:2025-03-19 13:03:45.147454617 +0000 UTC m=+0.806386202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Mar 19 13:03:45.160131 kubelet[2040]: I0319 13:03:45.159468 2040 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 13:03:45.162011 kubelet[2040]: E0319 13:03:45.161753 2040 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 13:03:45.163601 kubelet[2040]: E0319 13:03:45.163575 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.163601 kubelet[2040]: I0319 13:03:45.163609 2040 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 19 13:03:45.163961 kubelet[2040]: I0319 13:03:45.163815 2040 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 19 13:03:45.163961 kubelet[2040]: I0319 13:03:45.163873 2040 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 13:03:45.164651 kubelet[2040]: I0319 13:03:45.164625 2040 factory.go:221] Registration of the systemd container factory successfully
Mar 19 13:03:45.166682 kubelet[2040]: I0319 13:03:45.164724 2040 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 13:03:45.167703 kubelet[2040]: I0319 13:03:45.167687 2040 factory.go:221] Registration of the containerd container factory successfully
Mar 19 13:03:45.187767 kubelet[2040]: E0319 13:03:45.187699 2040 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.4\" not found" node="10.0.0.4"
Mar 19 13:03:45.197941 kubelet[2040]: I0319 13:03:45.197603 2040 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 19 13:03:45.197941 kubelet[2040]: I0319 13:03:45.197624 2040 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 19 13:03:45.197941 kubelet[2040]: I0319 13:03:45.197650 2040 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 13:03:45.201434 kubelet[2040]: I0319 13:03:45.201375 2040 policy_none.go:49] "None policy: Start"
Mar 19 13:03:45.201434 kubelet[2040]: I0319 13:03:45.201413 2040 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 19 13:03:45.201434 kubelet[2040]: I0319 13:03:45.201429 2040 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 13:03:45.211244 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 19 13:03:45.223672 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 19 13:03:45.228524 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 19 13:03:45.240812 kubelet[2040]: I0319 13:03:45.235250 2040 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 13:03:45.240812 kubelet[2040]: I0319 13:03:45.235508 2040 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 13:03:45.240812 kubelet[2040]: I0319 13:03:45.235521 2040 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 13:03:45.240812 kubelet[2040]: I0319 13:03:45.236347 2040 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 13:03:45.242400 kubelet[2040]: E0319 13:03:45.242360 2040 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 19 13:03:45.242467 kubelet[2040]: E0319 13:03:45.242417 2040 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found"
Mar 19 13:03:45.248653 kubelet[2040]: I0319 13:03:45.248575 2040 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 13:03:45.250054 kubelet[2040]: I0319 13:03:45.250013 2040 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 13:03:45.250054 kubelet[2040]: I0319 13:03:45.250043 2040 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 19 13:03:45.250178 kubelet[2040]: I0319 13:03:45.250070 2040 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 19 13:03:45.250178 kubelet[2040]: I0319 13:03:45.250083 2040 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 19 13:03:45.250234 kubelet[2040]: E0319 13:03:45.250212 2040 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 19 13:03:45.337529 kubelet[2040]: I0319 13:03:45.337029 2040 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.4"
Mar 19 13:03:45.347316 kubelet[2040]: I0319 13:03:45.347256 2040 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.4"
Mar 19 13:03:45.347316 kubelet[2040]: E0319 13:03:45.347323 2040 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found"
Mar 19 13:03:45.359614 kubelet[2040]: E0319 13:03:45.359566 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.385718 sudo[1906]: pam_unix(sudo:session): session closed for user root
Mar 19 13:03:45.460539 kubelet[2040]: E0319 13:03:45.460471 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.558727 sshd[1905]: Connection closed by 139.178.68.195 port 37254
Mar 19 13:03:45.559416 sshd-session[1903]: pam_unix(sshd:session): session closed for user core
Mar 19 13:03:45.561107 kubelet[2040]: E0319 13:03:45.561010 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.563631 systemd[1]: sshd@6-157.180.44.40:22-139.178.68.195:37254.service: Deactivated successfully.
Mar 19 13:03:45.565866 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 13:03:45.566171 systemd[1]: session-7.scope: Consumed 605ms CPU time, 75M memory peak.
Mar 19 13:03:45.567498 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit.
Mar 19 13:03:45.569263 systemd-logind[1505]: Removed session 7.
Mar 19 13:03:45.661833 kubelet[2040]: E0319 13:03:45.661657 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.762556 kubelet[2040]: E0319 13:03:45.762404 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.863613 kubelet[2040]: E0319 13:03:45.863545 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:45.964560 kubelet[2040]: E0319 13:03:45.964412 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.065006 kubelet[2040]: E0319 13:03:46.064945 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.084322 kubelet[2040]: I0319 13:03:46.084097 2040 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 19 13:03:46.084527 kubelet[2040]: W0319 13:03:46.084380 2040 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 19 13:03:46.084527 kubelet[2040]: W0319 13:03:46.084420 2040 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 19 13:03:46.143127 kubelet[2040]: E0319 13:03:46.143058 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:46.165099 kubelet[2040]: E0319 13:03:46.165044 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.266503 kubelet[2040]: E0319 13:03:46.266226 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.367346 kubelet[2040]: E0319 13:03:46.367219 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.467610 kubelet[2040]: E0319 13:03:46.467550 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.568686 kubelet[2040]: E0319 13:03:46.568634 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.669196 kubelet[2040]: E0319 13:03:46.669133 2040 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 19 13:03:46.771596 kubelet[2040]: I0319 13:03:46.771498 2040 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 19 13:03:46.772176 containerd[1526]: time="2025-03-19T13:03:46.772093050Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 13:03:46.773074 kubelet[2040]: I0319 13:03:46.772534 2040 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 19 13:03:47.140709 kubelet[2040]: I0319 13:03:47.140647 2040 apiserver.go:52] "Watching apiserver"
Mar 19 13:03:47.143955 kubelet[2040]: E0319 13:03:47.143869 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:47.145778 kubelet[2040]: E0319 13:03:47.145257 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:47.151995 systemd[1]: Created slice kubepods-besteffort-pod886b5fef_f50a_4be9_ab59_7421710e09f6.slice - libcontainer container kubepods-besteffort-pod886b5fef_f50a_4be9_ab59_7421710e09f6.slice.
Mar 19 13:03:47.164781 kubelet[2040]: I0319 13:03:47.164715 2040 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 19 13:03:47.164932 systemd[1]: Created slice kubepods-besteffort-pode8927586_f829_4440_a1fa_1831e5d29bbd.slice - libcontainer container kubepods-besteffort-pode8927586_f829_4440_a1fa_1831e5d29bbd.slice.
Mar 19 13:03:47.174038 kubelet[2040]: I0319 13:03:47.173997 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8927586-f829-4440-a1fa-1831e5d29bbd-tigera-ca-bundle\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174038 kubelet[2040]: I0319 13:03:47.174039 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-var-run-calico\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174285 kubelet[2040]: I0319 13:03:47.174060 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d293349-d5c7-4dde-8dc5-60732203edc5-varrun\") pod \"csi-node-driver-k6pv8\" (UID: \"6d293349-d5c7-4dde-8dc5-60732203edc5\") " pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:47.174285 kubelet[2040]: I0319 13:03:47.174081 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/886b5fef-f50a-4be9-ab59-7421710e09f6-kube-proxy\") pod \"kube-proxy-ghnpj\" (UID: \"886b5fef-f50a-4be9-ab59-7421710e09f6\") " pod="kube-system/kube-proxy-ghnpj"
Mar 19 13:03:47.174285 kubelet[2040]: I0319 13:03:47.174099 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln5tb\" (UniqueName: \"kubernetes.io/projected/886b5fef-f50a-4be9-ab59-7421710e09f6-kube-api-access-ln5tb\") pod \"kube-proxy-ghnpj\" (UID: \"886b5fef-f50a-4be9-ab59-7421710e09f6\") " pod="kube-system/kube-proxy-ghnpj"
Mar 19 13:03:47.174285 kubelet[2040]: I0319 13:03:47.174120 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-xtables-lock\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174285 kubelet[2040]: I0319 13:03:47.174138 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-var-lib-calico\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174387 kubelet[2040]: I0319 13:03:47.174156 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-cni-net-dir\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174387 kubelet[2040]: I0319 13:03:47.174183 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-flexvol-driver-host\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174387 kubelet[2040]: I0319 13:03:47.174202 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jdn\" (UniqueName: \"kubernetes.io/projected/e8927586-f829-4440-a1fa-1831e5d29bbd-kube-api-access-c4jdn\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174387 kubelet[2040]: I0319 13:03:47.174220 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d293349-d5c7-4dde-8dc5-60732203edc5-kubelet-dir\") pod \"csi-node-driver-k6pv8\" (UID: \"6d293349-d5c7-4dde-8dc5-60732203edc5\") " pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:47.174387 kubelet[2040]: I0319 13:03:47.174239 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/886b5fef-f50a-4be9-ab59-7421710e09f6-lib-modules\") pod \"kube-proxy-ghnpj\" (UID: \"886b5fef-f50a-4be9-ab59-7421710e09f6\") " pod="kube-system/kube-proxy-ghnpj"
Mar 19 13:03:47.174487 kubelet[2040]: I0319 13:03:47.174264 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-lib-modules\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174487 kubelet[2040]: I0319 13:03:47.174296 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-policysync\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174487 kubelet[2040]: I0319 13:03:47.174316 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8927586-f829-4440-a1fa-1831e5d29bbd-node-certs\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174487 kubelet[2040]: I0319 13:03:47.174336 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-cni-bin-dir\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174487 kubelet[2040]: I0319 13:03:47.174357 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d293349-d5c7-4dde-8dc5-60732203edc5-socket-dir\") pod \"csi-node-driver-k6pv8\" (UID: \"6d293349-d5c7-4dde-8dc5-60732203edc5\") " pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:47.174587 kubelet[2040]: I0319 13:03:47.174377 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8927586-f829-4440-a1fa-1831e5d29bbd-cni-log-dir\") pod \"calico-node-9qnkv\" (UID: \"e8927586-f829-4440-a1fa-1831e5d29bbd\") " pod="calico-system/calico-node-9qnkv"
Mar 19 13:03:47.174587 kubelet[2040]: I0319 13:03:47.174394 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d293349-d5c7-4dde-8dc5-60732203edc5-registration-dir\") pod \"csi-node-driver-k6pv8\" (UID: \"6d293349-d5c7-4dde-8dc5-60732203edc5\") " pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:47.174587 kubelet[2040]: I0319 13:03:47.174412 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb57\" (UniqueName: \"kubernetes.io/projected/6d293349-d5c7-4dde-8dc5-60732203edc5-kube-api-access-jkb57\") pod \"csi-node-driver-k6pv8\" (UID: \"6d293349-d5c7-4dde-8dc5-60732203edc5\") " pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:47.174587 kubelet[2040]: I0319 13:03:47.174429 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/886b5fef-f50a-4be9-ab59-7421710e09f6-xtables-lock\") pod \"kube-proxy-ghnpj\" (UID: \"886b5fef-f50a-4be9-ab59-7421710e09f6\") " pod="kube-system/kube-proxy-ghnpj"
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.280087 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.281918 kubelet[2040]: W0319 13:03:47.280145 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.280175 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.280601 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.281918 kubelet[2040]: W0319 13:03:47.280612 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.280669 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.280879 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.281918 kubelet[2040]: W0319 13:03:47.280955 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.281033 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.281918 kubelet[2040]: E0319 13:03:47.281347 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.282368 kubelet[2040]: W0319 13:03:47.281360 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.281512 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.281753 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.282368 kubelet[2040]: W0319 13:03:47.281764 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.281781 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.282076 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.282368 kubelet[2040]: W0319 13:03:47.282090 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.282103 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.282368 kubelet[2040]: E0319 13:03:47.282305 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.282368 kubelet[2040]: W0319 13:03:47.282316 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.282660 kubelet[2040]: E0319 13:03:47.282326 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.282660 kubelet[2040]: E0319 13:03:47.282497 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.282660 kubelet[2040]: W0319 13:03:47.282507 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.282660 kubelet[2040]: E0319 13:03:47.282518 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.285304 kubelet[2040]: E0319 13:03:47.282812 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.285304 kubelet[2040]: W0319 13:03:47.282828 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.285304 kubelet[2040]: E0319 13:03:47.282838 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.297094 kubelet[2040]: E0319 13:03:47.297034 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.297094 kubelet[2040]: W0319 13:03:47.297067 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.297094 kubelet[2040]: E0319 13:03:47.297093 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.307920 kubelet[2040]: E0319 13:03:47.301992 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.307920 kubelet[2040]: W0319 13:03:47.302022 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.307920 kubelet[2040]: E0319 13:03:47.302045 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.309536 kubelet[2040]: E0319 13:03:47.309495 2040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 13:03:47.309536 kubelet[2040]: W0319 13:03:47.309525 2040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 13:03:47.309682 kubelet[2040]: E0319 13:03:47.309548 2040 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 13:03:47.463533 containerd[1526]: time="2025-03-19T13:03:47.463352707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghnpj,Uid:886b5fef-f50a-4be9-ab59-7421710e09f6,Namespace:kube-system,Attempt:0,}"
Mar 19 13:03:47.469511 containerd[1526]: time="2025-03-19T13:03:47.469052847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qnkv,Uid:e8927586-f829-4440-a1fa-1831e5d29bbd,Namespace:calico-system,Attempt:0,}"
Mar 19 13:03:48.032205 containerd[1526]: time="2025-03-19T13:03:48.032048386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 19 13:03:48.034704 containerd[1526]: time="2025-03-19T13:03:48.034571146Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 19 13:03:48.036765 containerd[1526]: time="2025-03-19T13:03:48.036419672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Mar 19 13:03:48.038099 containerd[1526]: time="2025-03-19T13:03:48.038038639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 19 13:03:48.039689 containerd[1526]: time="2025-03-19T13:03:48.039601811Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 19 13:03:48.043554 containerd[1526]: time="2025-03-19T13:03:48.043446972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 19 13:03:48.044920 containerd[1526]: time="2025-03-19T13:03:48.044427811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 580.949588ms"
Mar 19 13:03:48.047662 containerd[1526]: time="2025-03-19T13:03:48.047592486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 578.344052ms"
Mar 19 13:03:48.145245 kubelet[2040]: E0319 13:03:48.144902 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:48.154015 containerd[1526]: time="2025-03-19T13:03:48.153476627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 13:03:48.154163 containerd[1526]: time="2025-03-19T13:03:48.154067905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 13:03:48.154193 containerd[1526]: time="2025-03-19T13:03:48.154151662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:03:48.154453 containerd[1526]: time="2025-03-19T13:03:48.154335508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:03:48.156304 containerd[1526]: time="2025-03-19T13:03:48.156110316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 13:03:48.156304 containerd[1526]: time="2025-03-19T13:03:48.156214011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 13:03:48.156304 containerd[1526]: time="2025-03-19T13:03:48.156228288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:03:48.156667 containerd[1526]: time="2025-03-19T13:03:48.156610324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:03:48.237109 systemd[1]: Started cri-containerd-358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142.scope - libcontainer container 358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142.
Mar 19 13:03:48.238561 systemd[1]: Started cri-containerd-8e0c3aa7b2d20251a5187fb61d0867b3a27f76571db3c0abdc317bcbd968365e.scope - libcontainer container 8e0c3aa7b2d20251a5187fb61d0867b3a27f76571db3c0abdc317bcbd968365e.
Mar 19 13:03:48.271187 containerd[1526]: time="2025-03-19T13:03:48.271147402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qnkv,Uid:e8927586-f829-4440-a1fa-1831e5d29bbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\""
Mar 19 13:03:48.274481 containerd[1526]: time="2025-03-19T13:03:48.273865740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghnpj,Uid:886b5fef-f50a-4be9-ab59-7421710e09f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e0c3aa7b2d20251a5187fb61d0867b3a27f76571db3c0abdc317bcbd968365e\""
Mar 19 13:03:48.275071 containerd[1526]: time="2025-03-19T13:03:48.274818908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 19 13:03:48.284850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054236132.mount: Deactivated successfully.
Mar 19 13:03:49.145123 kubelet[2040]: E0319 13:03:49.145062 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:49.251020 kubelet[2040]: E0319 13:03:49.250532 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:50.145624 kubelet[2040]: E0319 13:03:50.145556 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:50.296616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526368782.mount: Deactivated successfully.
Mar 19 13:03:50.389078 containerd[1526]: time="2025-03-19T13:03:50.388986961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:50.390438 containerd[1526]: time="2025-03-19T13:03:50.390230473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253"
Mar 19 13:03:50.393124 containerd[1526]: time="2025-03-19T13:03:50.391759461Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:50.395594 containerd[1526]: time="2025-03-19T13:03:50.394380286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:50.395594 containerd[1526]: time="2025-03-19T13:03:50.395021177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.120166092s"
Mar 19 13:03:50.395594 containerd[1526]: time="2025-03-19T13:03:50.395067955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\""
Mar 19 13:03:50.396792 containerd[1526]: time="2025-03-19T13:03:50.396712800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 19 13:03:50.398871 containerd[1526]: time="2025-03-19T13:03:50.398823347Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 19 13:03:50.421061 containerd[1526]: time="2025-03-19T13:03:50.420986951Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb\""
Mar 19 13:03:50.422041 containerd[1526]: time="2025-03-19T13:03:50.421931222Z" level=info msg="StartContainer for \"1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb\""
Mar 19 13:03:50.459173 systemd[1]: Started cri-containerd-1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb.scope - libcontainer container 1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb.
Mar 19 13:03:50.497941 containerd[1526]: time="2025-03-19T13:03:50.497854841Z" level=info msg="StartContainer for \"1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb\" returns successfully"
Mar 19 13:03:50.507633 systemd[1]: cri-containerd-1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb.scope: Deactivated successfully.
Mar 19 13:03:50.569665 containerd[1526]: time="2025-03-19T13:03:50.569588492Z" level=info msg="shim disconnected" id=1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb namespace=k8s.io
Mar 19 13:03:50.569872 containerd[1526]: time="2025-03-19T13:03:50.569690293Z" level=warning msg="cleaning up after shim disconnected" id=1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb namespace=k8s.io
Mar 19 13:03:50.569872 containerd[1526]: time="2025-03-19T13:03:50.569703768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 13:03:51.146531 kubelet[2040]: E0319 13:03:51.146384 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:51.255409 kubelet[2040]: E0319 13:03:51.254979 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:51.268461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1abe4a7cdaf151dd486495bc317cd5f4821a83cfba5f4dfd6d6de6214c92d9cb-rootfs.mount: Deactivated successfully.
Mar 19 13:03:51.459033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883236154.mount: Deactivated successfully.
Mar 19 13:03:51.802811 containerd[1526]: time="2025-03-19T13:03:51.802651811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:51.804609 containerd[1526]: time="2025-03-19T13:03:51.804556653Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918213"
Mar 19 13:03:51.806237 containerd[1526]: time="2025-03-19T13:03:51.806182743Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:51.809267 containerd[1526]: time="2025-03-19T13:03:51.809156068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:51.809919 containerd[1526]: time="2025-03-19T13:03:51.809485637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.412601676s"
Mar 19 13:03:51.809919 containerd[1526]: time="2025-03-19T13:03:51.809527245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\""
Mar 19 13:03:51.810909 containerd[1526]: time="2025-03-19T13:03:51.810855786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 19 13:03:51.812590 containerd[1526]: time="2025-03-19T13:03:51.812540305Z" level=info msg="CreateContainer within sandbox \"8e0c3aa7b2d20251a5187fb61d0867b3a27f76571db3c0abdc317bcbd968365e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 13:03:51.831623 containerd[1526]: time="2025-03-19T13:03:51.831561965Z" level=info msg="CreateContainer within sandbox \"8e0c3aa7b2d20251a5187fb61d0867b3a27f76571db3c0abdc317bcbd968365e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac4de0cddf7a846d4b9d78e8b36856eab8c32f3384e6be5df0fbababd3c7217f\""
Mar 19 13:03:51.832291 containerd[1526]: time="2025-03-19T13:03:51.832256818Z" level=info msg="StartContainer for \"ac4de0cddf7a846d4b9d78e8b36856eab8c32f3384e6be5df0fbababd3c7217f\""
Mar 19 13:03:51.859119 systemd[1]: Started cri-containerd-ac4de0cddf7a846d4b9d78e8b36856eab8c32f3384e6be5df0fbababd3c7217f.scope - libcontainer container ac4de0cddf7a846d4b9d78e8b36856eab8c32f3384e6be5df0fbababd3c7217f.
Mar 19 13:03:51.892408 containerd[1526]: time="2025-03-19T13:03:51.892354728Z" level=info msg="StartContainer for \"ac4de0cddf7a846d4b9d78e8b36856eab8c32f3384e6be5df0fbababd3c7217f\" returns successfully"
Mar 19 13:03:52.146816 kubelet[2040]: E0319 13:03:52.146758 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:52.297320 kubelet[2040]: I0319 13:03:52.297100 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ghnpj" podStartSLOduration=3.761865428 podStartE2EDuration="7.29708311s" podCreationTimestamp="2025-03-19 13:03:45 +0000 UTC" firstStartedPulling="2025-03-19 13:03:48.275552703 +0000 UTC m=+3.934484268" lastFinishedPulling="2025-03-19 13:03:51.810770386 +0000 UTC m=+7.469701950" observedRunningTime="2025-03-19 13:03:52.29699239 +0000 UTC m=+7.955923975" watchObservedRunningTime="2025-03-19 13:03:52.29708311 +0000 UTC m=+7.956014675"
Mar 19 13:03:53.148167 kubelet[2040]: E0319 13:03:53.148053 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:53.251632 kubelet[2040]: E0319 13:03:53.251186 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:54.148483 kubelet[2040]: E0319 13:03:54.148420 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:55.149512 kubelet[2040]: E0319 13:03:55.149416 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:55.251559 kubelet[2040]: E0319 13:03:55.251191 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:56.150030 kubelet[2040]: E0319 13:03:56.149824 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:57.111320 containerd[1526]: time="2025-03-19T13:03:57.111219452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:57.112381 containerd[1526]: time="2025-03-19T13:03:57.112318236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477"
Mar 19 13:03:57.113542 containerd[1526]: time="2025-03-19T13:03:57.113489988Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:57.115840 containerd[1526]: time="2025-03-19T13:03:57.115772035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:03:57.116736 containerd[1526]: time="2025-03-19T13:03:57.116392105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 5.305502306s"
Mar 19 13:03:57.116736 containerd[1526]: time="2025-03-19T13:03:57.116436328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\""
Mar 19 13:03:57.119061 containerd[1526]: time="2025-03-19T13:03:57.119011043Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 19 13:03:57.134234 containerd[1526]: time="2025-03-19T13:03:57.134181783Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d\""
Mar 19 13:03:57.135125 containerd[1526]: time="2025-03-19T13:03:57.135063652Z" level=info msg="StartContainer for \"cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d\""
Mar 19 13:03:57.151972 kubelet[2040]: E0319 13:03:57.150968 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:57.168133 systemd[1]: Started cri-containerd-cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d.scope - libcontainer container cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d.
Mar 19 13:03:57.197602 containerd[1526]: time="2025-03-19T13:03:57.197277727Z" level=info msg="StartContainer for \"cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d\" returns successfully"
Mar 19 13:03:57.252127 kubelet[2040]: E0319 13:03:57.252066 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:57.697977 systemd[1]: cri-containerd-cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d.scope: Deactivated successfully.
Mar 19 13:03:57.698482 systemd[1]: cri-containerd-cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d.scope: Consumed 553ms CPU time, 174.6M memory peak, 154M written to disk.
Mar 19 13:03:57.718450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d-rootfs.mount: Deactivated successfully.
Mar 19 13:03:57.732800 kubelet[2040]: I0319 13:03:57.732343 2040 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 19 13:03:57.769323 containerd[1526]: time="2025-03-19T13:03:57.769171500Z" level=info msg="shim disconnected" id=cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d namespace=k8s.io
Mar 19 13:03:57.769323 containerd[1526]: time="2025-03-19T13:03:57.769235330Z" level=warning msg="cleaning up after shim disconnected" id=cd82546f2d5db56bb41a263a47eaa85b2b73ea9d1a0ce408954975793a0b006d namespace=k8s.io
Mar 19 13:03:57.769323 containerd[1526]: time="2025-03-19T13:03:57.769246100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 13:03:58.152020 kubelet[2040]: E0319 13:03:58.151875 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:58.296755 containerd[1526]: time="2025-03-19T13:03:58.296705682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 19 13:03:59.153301 kubelet[2040]: E0319 13:03:59.153172 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:03:59.265584 systemd[1]: Created slice kubepods-besteffort-pod6d293349_d5c7_4dde_8dc5_60732203edc5.slice - libcontainer container kubepods-besteffort-pod6d293349_d5c7_4dde_8dc5_60732203edc5.slice.
Mar 19 13:03:59.275291 containerd[1526]: time="2025-03-19T13:03:59.274154190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:0,}"
Mar 19 13:03:59.357953 containerd[1526]: time="2025-03-19T13:03:59.357848005Z" level=error msg="Failed to destroy network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:03:59.360230 containerd[1526]: time="2025-03-19T13:03:59.358667427Z" level=error msg="encountered an error cleaning up failed sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:03:59.360230 containerd[1526]: time="2025-03-19T13:03:59.358970214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:03:59.360365 kubelet[2040]: E0319 13:03:59.359303 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:03:59.360365 kubelet[2040]: E0319 13:03:59.359388 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:59.360365 kubelet[2040]: E0319 13:03:59.359425 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:03:59.360520 kubelet[2040]: E0319 13:03:59.359513 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:03:59.362431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd-shm.mount: Deactivated successfully.
Mar 19 13:04:00.153943 kubelet[2040]: E0319 13:04:00.153786 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:00.303406 kubelet[2040]: I0319 13:04:00.303331 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd"
Mar 19 13:04:00.304590 containerd[1526]: time="2025-03-19T13:04:00.304489872Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:00.304973 containerd[1526]: time="2025-03-19T13:04:00.304849775Z" level=info msg="Ensure that sandbox d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd in task-service has been cleanup successfully"
Mar 19 13:04:00.307235 containerd[1526]: time="2025-03-19T13:04:00.307086799Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully"
Mar 19 13:04:00.307235 containerd[1526]: time="2025-03-19T13:04:00.307118278Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully"
Mar 19 13:04:00.310002 containerd[1526]: time="2025-03-19T13:04:00.309373036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:1,}"
Mar 19 13:04:00.309606 systemd[1]: run-netns-cni\x2d30e04473\x2dae45\x2d228c\x2d6934\x2d815c02da7c58.mount: Deactivated successfully.
Mar 19 13:04:00.397553 containerd[1526]: time="2025-03-19T13:04:00.397023230Z" level=error msg="Failed to destroy network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:00.397553 containerd[1526]: time="2025-03-19T13:04:00.397390487Z" level=error msg="encountered an error cleaning up failed sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:00.397553 containerd[1526]: time="2025-03-19T13:04:00.397459396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:00.399090 kubelet[2040]: E0319 13:04:00.399020 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:00.399090 kubelet[2040]: E0319 13:04:00.399084 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:00.399323 kubelet[2040]: E0319 13:04:00.399113 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:00.399323 kubelet[2040]: E0319 13:04:00.399162 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:04:00.401154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce-shm.mount: Deactivated successfully.
Mar 19 13:04:01.154355 kubelet[2040]: E0319 13:04:01.154276 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:01.310480 kubelet[2040]: I0319 13:04:01.309581 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce"
Mar 19 13:04:01.312841 containerd[1526]: time="2025-03-19T13:04:01.312786925Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\""
Mar 19 13:04:01.313438 containerd[1526]: time="2025-03-19T13:04:01.313364366Z" level=info msg="Ensure that sandbox ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce in task-service has been cleanup successfully"
Mar 19 13:04:01.313926 containerd[1526]: time="2025-03-19T13:04:01.313849763Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully"
Mar 19 13:04:01.313926 containerd[1526]: time="2025-03-19T13:04:01.313908994Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully"
Mar 19 13:04:01.316494 containerd[1526]: time="2025-03-19T13:04:01.316064366Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:01.316494 containerd[1526]: time="2025-03-19T13:04:01.316157390Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully"
Mar 19 13:04:01.316494 containerd[1526]: time="2025-03-19T13:04:01.316169553Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully"
Mar 19 13:04:01.318097 systemd[1]: run-netns-cni\x2dc3971a24\x2d69f0\x2d6f7f\x2dab87\x2d90c88a2f9172.mount: Deactivated successfully.
Mar 19 13:04:01.321365 containerd[1526]: time="2025-03-19T13:04:01.321300481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:2,}"
Mar 19 13:04:01.402404 containerd[1526]: time="2025-03-19T13:04:01.402347714Z" level=error msg="Failed to destroy network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:01.406035 containerd[1526]: time="2025-03-19T13:04:01.404734770Z" level=error msg="encountered an error cleaning up failed sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:01.406035 containerd[1526]: time="2025-03-19T13:04:01.404818727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:01.406232 kubelet[2040]: E0319 13:04:01.405084 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:01.406232 kubelet[2040]: E0319 13:04:01.405142 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:01.406232 kubelet[2040]: E0319 13:04:01.405172 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:01.406395 kubelet[2040]: E0319 13:04:01.405217 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:04:01.407207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105-shm.mount: Deactivated successfully.
Mar 19 13:04:02.155054 kubelet[2040]: E0319 13:04:02.154978 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:02.314740 kubelet[2040]: I0319 13:04:02.314665 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105"
Mar 19 13:04:02.316148 containerd[1526]: time="2025-03-19T13:04:02.315663418Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\""
Mar 19 13:04:02.316148 containerd[1526]: time="2025-03-19T13:04:02.315941909Z" level=info msg="Ensure that sandbox d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105 in task-service has been cleanup successfully"
Mar 19 13:04:02.318063 containerd[1526]: time="2025-03-19T13:04:02.318037760Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully"
Mar 19 13:04:02.318161 containerd[1526]: time="2025-03-19T13:04:02.318146503Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully"
Mar 19 13:04:02.318728 containerd[1526]: time="2025-03-19T13:04:02.318709397Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\""
Mar 19 13:04:02.318929 containerd[1526]: time="2025-03-19T13:04:02.318913759Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully"
Mar 19 13:04:02.319025 containerd[1526]: time="2025-03-19T13:04:02.318994469Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully"
Mar 19 13:04:02.320316 containerd[1526]: time="2025-03-19T13:04:02.320107141Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:02.320316 containerd[1526]: time="2025-03-19T13:04:02.320199012Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully"
Mar 19 13:04:02.320316 containerd[1526]: time="2025-03-19T13:04:02.320212668Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully"
Mar 19 13:04:02.321593 systemd[1]: run-netns-cni\x2deac291e3\x2d4589\x2d112b\x2d4046\x2d6834444e397e.mount: Deactivated successfully.
Mar 19 13:04:02.322739 containerd[1526]: time="2025-03-19T13:04:02.322328697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:3,}"
Mar 19 13:04:02.415094 containerd[1526]: time="2025-03-19T13:04:02.414944952Z" level=error msg="Failed to destroy network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:02.417443 containerd[1526]: time="2025-03-19T13:04:02.417208126Z" level=error msg="encountered an error cleaning up failed sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:02.417443 containerd[1526]: time="2025-03-19T13:04:02.417325816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:02.417732 kubelet[2040]: E0319 13:04:02.417677 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 13:04:02.417792 kubelet[2040]: E0319 13:04:02.417763 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:02.417835 kubelet[2040]: E0319 13:04:02.417789 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8"
Mar 19 13:04:02.417870 kubelet[2040]: E0319 13:04:02.417846 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5"
Mar 19 13:04:02.419726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3-shm.mount: Deactivated successfully.
Mar 19 13:04:03.156106 kubelet[2040]: E0319 13:04:03.156038 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:03.321260 kubelet[2040]: I0319 13:04:03.321216 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3"
Mar 19 13:04:03.322930 containerd[1526]: time="2025-03-19T13:04:03.322431194Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\""
Mar 19 13:04:03.322930 containerd[1526]: time="2025-03-19T13:04:03.322708402Z" level=info msg="Ensure that sandbox ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3 in task-service has been cleanup successfully"
Mar 19 13:04:03.326793 systemd[1]: run-netns-cni\x2d3b5a9f28\x2dfaa4\x2d359f\x2dfd8f\x2de35112064682.mount: Deactivated successfully.
Mar 19 13:04:03.329028 containerd[1526]: time="2025-03-19T13:04:03.328972682Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully" Mar 19 13:04:03.329257 containerd[1526]: time="2025-03-19T13:04:03.329163919Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully" Mar 19 13:04:03.331297 containerd[1526]: time="2025-03-19T13:04:03.331075615Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\"" Mar 19 13:04:03.331297 containerd[1526]: time="2025-03-19T13:04:03.331206311Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully" Mar 19 13:04:03.331297 containerd[1526]: time="2025-03-19T13:04:03.331219515Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully" Mar 19 13:04:03.331877 containerd[1526]: time="2025-03-19T13:04:03.331831640Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\"" Mar 19 13:04:03.332459 containerd[1526]: time="2025-03-19T13:04:03.332438586Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully" Mar 19 13:04:03.332661 containerd[1526]: time="2025-03-19T13:04:03.332553842Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully" Mar 19 13:04:03.333167 containerd[1526]: time="2025-03-19T13:04:03.333030994Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\"" Mar 19 13:04:03.333167 containerd[1526]: time="2025-03-19T13:04:03.333127715Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully" Mar 
19 13:04:03.333167 containerd[1526]: time="2025-03-19T13:04:03.333140709Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully" Mar 19 13:04:03.334330 containerd[1526]: time="2025-03-19T13:04:03.334262498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:4,}" Mar 19 13:04:03.492263 containerd[1526]: time="2025-03-19T13:04:03.492007536Z" level=error msg="Failed to destroy network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:03.496184 containerd[1526]: time="2025-03-19T13:04:03.495340762Z" level=error msg="encountered an error cleaning up failed sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:03.496184 containerd[1526]: time="2025-03-19T13:04:03.495436771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:03.496402 kubelet[2040]: E0319 13:04:03.495745 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:03.496402 kubelet[2040]: E0319 13:04:03.495813 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:03.496402 kubelet[2040]: E0319 13:04:03.495840 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:03.495480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119-shm.mount: Deactivated successfully. 
Mar 19 13:04:03.496597 kubelet[2040]: E0319 13:04:03.495920 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5" Mar 19 13:04:04.014722 systemd[1]: Created slice kubepods-besteffort-pod8802cf18_2cd9_4f10_8c3f_a4a7535edef8.slice - libcontainer container kubepods-besteffort-pod8802cf18_2cd9_4f10_8c3f_a4a7535edef8.slice. Mar 19 13:04:04.094989 kubelet[2040]: I0319 13:04:04.094923 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrxc\" (UniqueName: \"kubernetes.io/projected/8802cf18-2cd9-4f10-8c3f-a4a7535edef8-kube-api-access-gxrxc\") pod \"nginx-deployment-7fcdb87857-v5gmn\" (UID: \"8802cf18-2cd9-4f10-8c3f-a4a7535edef8\") " pod="default/nginx-deployment-7fcdb87857-v5gmn" Mar 19 13:04:04.156820 kubelet[2040]: E0319 13:04:04.156765 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:04.320129 containerd[1526]: time="2025-03-19T13:04:04.319717904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:0,}" Mar 19 13:04:04.330903 kubelet[2040]: I0319 13:04:04.330567 2040 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119" Mar 19 13:04:04.334322 containerd[1526]: time="2025-03-19T13:04:04.331535552Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\"" Mar 19 13:04:04.334322 containerd[1526]: time="2025-03-19T13:04:04.331765452Z" level=info msg="Ensure that sandbox 7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119 in task-service has been cleanup successfully" Mar 19 13:04:04.336766 containerd[1526]: time="2025-03-19T13:04:04.336035261Z" level=info msg="TearDown network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" successfully" Mar 19 13:04:04.336766 containerd[1526]: time="2025-03-19T13:04:04.336076659Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" returns successfully" Mar 19 13:04:04.336175 systemd[1]: run-netns-cni\x2d10f29f82\x2d777f\x2d9ec1\x2dfced\x2d56f7bbecabcb.mount: Deactivated successfully. 
Mar 19 13:04:04.339165 containerd[1526]: time="2025-03-19T13:04:04.338663448Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\"" Mar 19 13:04:04.339165 containerd[1526]: time="2025-03-19T13:04:04.338774025Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully" Mar 19 13:04:04.339165 containerd[1526]: time="2025-03-19T13:04:04.338826534Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully" Mar 19 13:04:04.339529 containerd[1526]: time="2025-03-19T13:04:04.339433779Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\"" Mar 19 13:04:04.339586 containerd[1526]: time="2025-03-19T13:04:04.339555186Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully" Mar 19 13:04:04.339586 containerd[1526]: time="2025-03-19T13:04:04.339570174Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully" Mar 19 13:04:04.340568 containerd[1526]: time="2025-03-19T13:04:04.340345084Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\"" Mar 19 13:04:04.340568 containerd[1526]: time="2025-03-19T13:04:04.340429312Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully" Mar 19 13:04:04.340568 containerd[1526]: time="2025-03-19T13:04:04.340442015Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully" Mar 19 13:04:04.341536 containerd[1526]: time="2025-03-19T13:04:04.341312304Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\"" Mar 19 13:04:04.341536 
containerd[1526]: time="2025-03-19T13:04:04.341415527Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully" Mar 19 13:04:04.341536 containerd[1526]: time="2025-03-19T13:04:04.341425245Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully" Mar 19 13:04:04.342637 containerd[1526]: time="2025-03-19T13:04:04.342544048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:5,}" Mar 19 13:04:04.477260 containerd[1526]: time="2025-03-19T13:04:04.477096111Z" level=error msg="Failed to destroy network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.478378 containerd[1526]: time="2025-03-19T13:04:04.478275238Z" level=error msg="encountered an error cleaning up failed sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.478378 containerd[1526]: time="2025-03-19T13:04:04.478341140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 19 13:04:04.480079 kubelet[2040]: E0319 13:04:04.479981 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.480079 kubelet[2040]: E0319 13:04:04.480054 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-v5gmn" Mar 19 13:04:04.480268 kubelet[2040]: E0319 13:04:04.480079 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-v5gmn" Mar 19 13:04:04.480268 kubelet[2040]: E0319 13:04:04.480130 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-v5gmn_default(8802cf18-2cd9-4f10-8c3f-a4a7535edef8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-v5gmn_default(8802cf18-2cd9-4f10-8c3f-a4a7535edef8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-v5gmn" podUID="8802cf18-2cd9-4f10-8c3f-a4a7535edef8" Mar 19 13:04:04.496625 containerd[1526]: time="2025-03-19T13:04:04.496582178Z" level=error msg="Failed to destroy network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.497381 containerd[1526]: time="2025-03-19T13:04:04.497333193Z" level=error msg="encountered an error cleaning up failed sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.497381 containerd[1526]: time="2025-03-19T13:04:04.497394047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.498028 kubelet[2040]: E0319 13:04:04.497760 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:04.498028 kubelet[2040]: E0319 13:04:04.497858 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:04.498028 kubelet[2040]: E0319 13:04:04.497882 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:04.498545 kubelet[2040]: E0319 13:04:04.498295 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5" Mar 19 13:04:05.138244 kubelet[2040]: E0319 13:04:05.138163 2040 file.go:104] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:05.157878 kubelet[2040]: E0319 13:04:05.157816 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:05.327670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e-shm.mount: Deactivated successfully. Mar 19 13:04:05.327755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1-shm.mount: Deactivated successfully. Mar 19 13:04:05.335055 kubelet[2040]: I0319 13:04:05.334979 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e" Mar 19 13:04:05.335843 containerd[1526]: time="2025-03-19T13:04:05.335684166Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\"" Mar 19 13:04:05.336787 kubelet[2040]: I0319 13:04:05.336728 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1" Mar 19 13:04:05.337430 containerd[1526]: time="2025-03-19T13:04:05.337351115Z" level=info msg="Ensure that sandbox 26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e in task-service has been cleanup successfully" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.337651367Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\"" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.337764960Z" level=info msg="Ensure that sandbox 997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1 in task-service has been cleanup successfully" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.337881819Z" level=info msg="TearDown network for sandbox 
\"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" successfully" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.337908909Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" returns successfully" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.339672518Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\"" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.339737190Z" level=info msg="TearDown network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" successfully" Mar 19 13:04:05.340015 containerd[1526]: time="2025-03-19T13:04:05.339746167Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" returns successfully" Mar 19 13:04:05.339376 systemd[1]: run-netns-cni\x2dba5744a2\x2d6c28\x2d619f\x2d73b9\x2d65d34884886d.mount: Deactivated successfully. 
Mar 19 13:04:05.340476 containerd[1526]: time="2025-03-19T13:04:05.340175329Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\"" Mar 19 13:04:05.340476 containerd[1526]: time="2025-03-19T13:04:05.340224021Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully" Mar 19 13:04:05.340476 containerd[1526]: time="2025-03-19T13:04:05.340245922Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully" Mar 19 13:04:05.340476 containerd[1526]: time="2025-03-19T13:04:05.340404259Z" level=info msg="TearDown network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" successfully" Mar 19 13:04:05.340476 containerd[1526]: time="2025-03-19T13:04:05.340416020Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" returns successfully" Mar 19 13:04:05.339451 systemd[1]: run-netns-cni\x2de6fe3f88\x2d7fb2\x2d941d\x2d7749\x2da50c3c94e8d9.mount: Deactivated successfully. 
Mar 19 13:04:05.340674 containerd[1526]: time="2025-03-19T13:04:05.340488024Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\"" Mar 19 13:04:05.340674 containerd[1526]: time="2025-03-19T13:04:05.340535083Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully" Mar 19 13:04:05.340674 containerd[1526]: time="2025-03-19T13:04:05.340541865Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully" Mar 19 13:04:05.344546 containerd[1526]: time="2025-03-19T13:04:05.343677442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:1,}" Mar 19 13:04:05.344546 containerd[1526]: time="2025-03-19T13:04:05.343771599Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\"" Mar 19 13:04:05.344546 containerd[1526]: time="2025-03-19T13:04:05.343851257Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully" Mar 19 13:04:05.344546 containerd[1526]: time="2025-03-19T13:04:05.343863290Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully" Mar 19 13:04:05.345580 containerd[1526]: time="2025-03-19T13:04:05.345435321Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\"" Mar 19 13:04:05.345580 containerd[1526]: time="2025-03-19T13:04:05.345538915Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully" Mar 19 13:04:05.345580 containerd[1526]: time="2025-03-19T13:04:05.345552079Z" level=info msg="StopPodSandbox for 
\"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully" Mar 19 13:04:05.346203 containerd[1526]: time="2025-03-19T13:04:05.346119752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:6,}" Mar 19 13:04:05.468524 containerd[1526]: time="2025-03-19T13:04:05.468403534Z" level=error msg="Failed to destroy network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.469501 containerd[1526]: time="2025-03-19T13:04:05.469025709Z" level=error msg="encountered an error cleaning up failed sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.469501 containerd[1526]: time="2025-03-19T13:04:05.469155722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.469852 kubelet[2040]: E0319 13:04:05.469767 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.469852 kubelet[2040]: E0319 13:04:05.469819 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-v5gmn" Mar 19 13:04:05.469852 kubelet[2040]: E0319 13:04:05.469840 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-v5gmn" Mar 19 13:04:05.470335 kubelet[2040]: E0319 13:04:05.469878 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-v5gmn_default(8802cf18-2cd9-4f10-8c3f-a4a7535edef8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-v5gmn_default(8802cf18-2cd9-4f10-8c3f-a4a7535edef8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-v5gmn" podUID="8802cf18-2cd9-4f10-8c3f-a4a7535edef8" Mar 19 13:04:05.482811 containerd[1526]: 
time="2025-03-19T13:04:05.482746791Z" level=error msg="Failed to destroy network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.483395 containerd[1526]: time="2025-03-19T13:04:05.483366370Z" level=error msg="encountered an error cleaning up failed sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.483741 containerd[1526]: time="2025-03-19T13:04:05.483641835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.484138 kubelet[2040]: E0319 13:04:05.483940 2040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 13:04:05.484138 kubelet[2040]: E0319 13:04:05.484027 2040 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:05.484138 kubelet[2040]: E0319 13:04:05.484052 2040 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6pv8" Mar 19 13:04:05.484289 kubelet[2040]: E0319 13:04:05.484121 2040 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6pv8_calico-system(6d293349-d5c7-4dde-8dc5-60732203edc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6pv8" podUID="6d293349-d5c7-4dde-8dc5-60732203edc5" Mar 19 13:04:06.015508 containerd[1526]: time="2025-03-19T13:04:06.015445733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:06.016955 containerd[1526]: time="2025-03-19T13:04:06.016531795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 19 
13:04:06.021371 containerd[1526]: time="2025-03-19T13:04:06.021319253Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:06.023981 containerd[1526]: time="2025-03-19T13:04:06.023929416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:06.024747 containerd[1526]: time="2025-03-19T13:04:06.024717200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.727969721s" Mar 19 13:04:06.024802 containerd[1526]: time="2025-03-19T13:04:06.024748990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 19 13:04:06.040617 containerd[1526]: time="2025-03-19T13:04:06.040557029Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 19 13:04:06.057858 containerd[1526]: time="2025-03-19T13:04:06.057792267Z" level=info msg="CreateContainer within sandbox \"358674ba06efb69118b61af5d5c77b1f7a62479dd6c4cf6ac087b33161228142\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d889666dcc4cc62be67fc6f50887e48f78e896942962703cc419cae2f337ae63\"" Mar 19 13:04:06.058434 containerd[1526]: time="2025-03-19T13:04:06.058397841Z" level=info msg="StartContainer for \"d889666dcc4cc62be67fc6f50887e48f78e896942962703cc419cae2f337ae63\"" 
Mar 19 13:04:06.156186 systemd[1]: Started cri-containerd-d889666dcc4cc62be67fc6f50887e48f78e896942962703cc419cae2f337ae63.scope - libcontainer container d889666dcc4cc62be67fc6f50887e48f78e896942962703cc419cae2f337ae63. Mar 19 13:04:06.160317 kubelet[2040]: E0319 13:04:06.159423 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:06.192527 containerd[1526]: time="2025-03-19T13:04:06.192485923Z" level=info msg="StartContainer for \"d889666dcc4cc62be67fc6f50887e48f78e896942962703cc419cae2f337ae63\" returns successfully" Mar 19 13:04:06.278865 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 19 13:04:06.279004 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 19 13:04:06.331408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62-shm.mount: Deactivated successfully. Mar 19 13:04:06.331714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0-shm.mount: Deactivated successfully. Mar 19 13:04:06.331802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133501926.mount: Deactivated successfully. 
Mar 19 13:04:06.346517 kubelet[2040]: I0319 13:04:06.345360 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62" Mar 19 13:04:06.346672 containerd[1526]: time="2025-03-19T13:04:06.346100632Z" level=info msg="StopPodSandbox for \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\"" Mar 19 13:04:06.346672 containerd[1526]: time="2025-03-19T13:04:06.346288864Z" level=info msg="Ensure that sandbox eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62 in task-service has been cleanup successfully" Mar 19 13:04:06.349296 containerd[1526]: time="2025-03-19T13:04:06.349255144Z" level=info msg="TearDown network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" successfully" Mar 19 13:04:06.352028 systemd[1]: run-netns-cni\x2d94475df5\x2d4aaa\x2dbe71\x2d0114\x2dfa2c5d7c156d.mount: Deactivated successfully. Mar 19 13:04:06.353270 containerd[1526]: time="2025-03-19T13:04:06.352135233Z" level=info msg="StopPodSandbox for \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" returns successfully" Mar 19 13:04:06.354553 containerd[1526]: time="2025-03-19T13:04:06.354507412Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\"" Mar 19 13:04:06.354673 containerd[1526]: time="2025-03-19T13:04:06.354645380Z" level=info msg="TearDown network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" successfully" Mar 19 13:04:06.354759 containerd[1526]: time="2025-03-19T13:04:06.354672901Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" returns successfully" Mar 19 13:04:06.356269 containerd[1526]: time="2025-03-19T13:04:06.355505229Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\"" Mar 19 13:04:06.356269 containerd[1526]: 
time="2025-03-19T13:04:06.355690576Z" level=info msg="TearDown network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" successfully" Mar 19 13:04:06.356269 containerd[1526]: time="2025-03-19T13:04:06.355709561Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" returns successfully" Mar 19 13:04:06.356599 kubelet[2040]: I0319 13:04:06.356575 2040 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0" Mar 19 13:04:06.357100 containerd[1526]: time="2025-03-19T13:04:06.356879490Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\"" Mar 19 13:04:06.358710 containerd[1526]: time="2025-03-19T13:04:06.357723509Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully" Mar 19 13:04:06.358710 containerd[1526]: time="2025-03-19T13:04:06.357751092Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully" Mar 19 13:04:06.359131 containerd[1526]: time="2025-03-19T13:04:06.359107029Z" level=info msg="StopPodSandbox for \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\"" Mar 19 13:04:06.359275 containerd[1526]: time="2025-03-19T13:04:06.359242331Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\"" Mar 19 13:04:06.359381 containerd[1526]: time="2025-03-19T13:04:06.359356315Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully" Mar 19 13:04:06.359424 containerd[1526]: time="2025-03-19T13:04:06.359379006Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully" Mar 19 13:04:06.359660 
containerd[1526]: time="2025-03-19T13:04:06.359637822Z" level=info msg="Ensure that sandbox 7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0 in task-service has been cleanup successfully" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.362964305Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\"" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.363127772Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.363142600Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.363613610Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\"" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.363699983Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully" Mar 19 13:04:06.364929 containerd[1526]: time="2025-03-19T13:04:06.363712004Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully" Mar 19 13:04:06.367974 containerd[1526]: time="2025-03-19T13:04:06.365689254Z" level=info msg="TearDown network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" successfully" Mar 19 13:04:06.366147 systemd[1]: run-netns-cni\x2de8c5158e\x2d0e98\x2dc3d0\x2d9c84\x2d8c694aaf4198.mount: Deactivated successfully. 
Mar 19 13:04:06.370555 containerd[1526]: time="2025-03-19T13:04:06.369985683Z" level=info msg="StopPodSandbox for \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" returns successfully" Mar 19 13:04:06.370555 containerd[1526]: time="2025-03-19T13:04:06.366079615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:7,}" Mar 19 13:04:06.372810 containerd[1526]: time="2025-03-19T13:04:06.372781605Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\"" Mar 19 13:04:06.373482 containerd[1526]: time="2025-03-19T13:04:06.373105281Z" level=info msg="TearDown network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" successfully" Mar 19 13:04:06.373482 containerd[1526]: time="2025-03-19T13:04:06.373128614Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" returns successfully" Mar 19 13:04:06.373684 kubelet[2040]: I0319 13:04:06.373652 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9qnkv" podStartSLOduration=3.621984596 podStartE2EDuration="21.373636785s" podCreationTimestamp="2025-03-19 13:03:45 +0000 UTC" firstStartedPulling="2025-03-19 13:03:48.27400507 +0000 UTC m=+3.932936636" lastFinishedPulling="2025-03-19 13:04:06.025657259 +0000 UTC m=+21.684588825" observedRunningTime="2025-03-19 13:04:06.371209554 +0000 UTC m=+22.030141138" watchObservedRunningTime="2025-03-19 13:04:06.373636785 +0000 UTC m=+22.032568350" Mar 19 13:04:06.373784 containerd[1526]: time="2025-03-19T13:04:06.373750737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:2,}" Mar 19 13:04:06.724933 systemd-networkd[1429]: cali9065ca0f894: Link UP Mar 19 13:04:06.725689 
systemd-networkd[1429]: cali9065ca0f894: Gained carrier Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.440 [INFO][2849] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.491 [INFO][2849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--k6pv8-eth0 csi-node-driver- calico-system 6d293349-d5c7-4dde-8dc5-60732203edc5 1627 0 2025-03-19 13:03:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-k6pv8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9065ca0f894 [] []}} ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.492 [INFO][2849] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.525 [INFO][2882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" HandleID="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Workload="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.583 [INFO][2882] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" HandleID="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Workload="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a8a70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-k6pv8", "timestamp":"2025-03-19 13:04:06.525270878 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.583 [INFO][2882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.583 [INFO][2882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.583 [INFO][2882] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.586 [INFO][2882] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.593 [INFO][2882] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.601 [INFO][2882] ipam/ipam.go 521: Ran out of existing affine blocks for host host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.605 [INFO][2882] ipam/ipam.go 538: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.618 [INFO][2882] ipam/ipam.go 550: Found unclaimed block host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.618 [INFO][2882] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.626 [INFO][2882] ipam/ipam_block_reader_writer.go 182: Block affinity already exists, getting existing affinity host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.630 [INFO][2882] ipam/ipam_block_reader_writer.go 190: Got existing affinity host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.630 [INFO][2882] ipam/ipam_block_reader_writer.go 198: Existing affinity is already confirmed host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.630 [INFO][2882] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.633 [INFO][2882] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.634 [INFO][2882] ipam/ipam.go 585: Block '192.168.99.192/26' has 64 free ips which is more than 1 ips required. 
host="10.0.0.4" subnet=192.168.99.192/26 Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.634 [INFO][2882] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.636 [INFO][2882] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e Mar 19 13:04:06.745947 containerd[1526]: 2025-03-19 13:04:06.642 [INFO][2882] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.649 [ERROR][2882] ipam/customresource.go 183: Error updating resource Key=IPAMBlock(192-168-99-192-26) Name="192-168-99-192-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-99-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1741", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.99.192/26", Affinity:(*string)(0xc0006742b0), Allocations:[]*int{(*int)(0xc00065e3c8), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0004a8a70), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-k6pv8", "timestamp":"2025-03-19 13:04:06.525270878 +0000 UTC"}}}, SequenceNumber:0x182e35f859accecf, SequenceNumberForAllocation:map[string]uint64{"0":0x182e35f859accece}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-99-192-26": the object has been modified; please apply your changes to the latest version and try again Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.649 [INFO][2882] ipam/ipam.go 1207: Failed to update block block=192.168.99.192/26 error=update conflict: IPAMBlock(192-168-99-192-26) handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.676 [INFO][2882] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.678 [INFO][2882] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.682 [INFO][2882] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.688 [INFO][2882] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.688 [INFO][2882] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" host="10.0.0.4" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.688 [INFO][2882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.688 [INFO][2882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" HandleID="k8s-pod-network.9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Workload="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.752790 containerd[1526]: 2025-03-19 13:04:06.702 [INFO][2849] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--k6pv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d293349-d5c7-4dde-8dc5-60732203edc5", ResourceVersion:"1627", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-k6pv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9065ca0f894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:06.754587 containerd[1526]: 2025-03-19 13:04:06.702 [INFO][2849] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.754587 containerd[1526]: 2025-03-19 13:04:06.702 [INFO][2849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9065ca0f894 ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.754587 containerd[1526]: 2025-03-19 13:04:06.726 [INFO][2849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.754587 containerd[1526]: 2025-03-19 13:04:06.726 [INFO][2849] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--k6pv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d293349-d5c7-4dde-8dc5-60732203edc5", ResourceVersion:"1627", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 3, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e", Pod:"csi-node-driver-k6pv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9065ca0f894", MAC:"c2:8c:6b:54:ca:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:06.754587 containerd[1526]: 2025-03-19 13:04:06.737 [INFO][2849] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e" Namespace="calico-system" Pod="csi-node-driver-k6pv8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--k6pv8-eth0" Mar 19 13:04:06.760052 systemd-networkd[1429]: cali06ef6a8f5a8: Link UP Mar 19 13:04:06.760666 systemd-networkd[1429]: cali06ef6a8f5a8: Gained carrier Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.455 [INFO][2858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.491 [INFO][2858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0 nginx-deployment-7fcdb87857- default 
8802cf18-2cd9-4f10-8c3f-a4a7535edef8 1717 0 2025-03-19 13:04:03 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-7fcdb87857-v5gmn eth0 default [] [] [kns.default ksa.default.default] cali06ef6a8f5a8 [] []}} ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.492 [INFO][2858] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.528 [INFO][2883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" HandleID="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.584 [INFO][2883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" HandleID="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001037c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-7fcdb87857-v5gmn", "timestamp":"2025-03-19 13:04:06.528314763 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.584 [INFO][2883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.689 [INFO][2883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.689 [INFO][2883] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.692 [INFO][2883] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.703 [INFO][2883] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.714 [INFO][2883] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.718 [INFO][2883] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.728 [INFO][2883] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.728 [INFO][2883] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.730 [INFO][2883] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581 Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.739 
[INFO][2883] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.750 [INFO][2883] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.750 [INFO][2883] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" host="10.0.0.4" Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.750 [INFO][2883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 13:04:06.775063 containerd[1526]: 2025-03-19 13:04:06.750 [INFO][2883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" HandleID="k8s-pod-network.efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.752 [INFO][2858] cni-plugin/k8s.go 386: Populated endpoint ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"8802cf18-2cd9-4f10-8c3f-a4a7535edef8", ResourceVersion:"1717", Generation:0, CreationTimestamp:time.Date(2025, 
time.March, 19, 13, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-v5gmn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali06ef6a8f5a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.753 [INFO][2858] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.753 [INFO][2858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06ef6a8f5a8 ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.757 [INFO][2858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" 
Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.758 [INFO][2858] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"8802cf18-2cd9-4f10-8c3f-a4a7535edef8", ResourceVersion:"1717", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581", Pod:"nginx-deployment-7fcdb87857-v5gmn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali06ef6a8f5a8", MAC:"ba:f7:f2:1f:6b:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:06.775753 containerd[1526]: 2025-03-19 13:04:06.770 [INFO][2858] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581" 
Namespace="default" Pod="nginx-deployment-7fcdb87857-v5gmn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--v5gmn-eth0" Mar 19 13:04:06.779521 containerd[1526]: time="2025-03-19T13:04:06.779104476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 13:04:06.779521 containerd[1526]: time="2025-03-19T13:04:06.779338374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 13:04:06.779693 containerd[1526]: time="2025-03-19T13:04:06.779576079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:06.780447 containerd[1526]: time="2025-03-19T13:04:06.780043333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:06.796273 containerd[1526]: time="2025-03-19T13:04:06.796119564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 13:04:06.796273 containerd[1526]: time="2025-03-19T13:04:06.796209331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 13:04:06.796486 containerd[1526]: time="2025-03-19T13:04:06.796276166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:06.796768 containerd[1526]: time="2025-03-19T13:04:06.796376214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:06.799125 systemd[1]: Started cri-containerd-9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e.scope - libcontainer container 9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e. Mar 19 13:04:06.825325 systemd[1]: Started cri-containerd-efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581.scope - libcontainer container efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581. Mar 19 13:04:06.839690 containerd[1526]: time="2025-03-19T13:04:06.839632821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6pv8,Uid:6d293349-d5c7-4dde-8dc5-60732203edc5,Namespace:calico-system,Attempt:7,} returns sandbox id \"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e\"" Mar 19 13:04:06.841618 containerd[1526]: time="2025-03-19T13:04:06.841584112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 19 13:04:06.868934 containerd[1526]: time="2025-03-19T13:04:06.868812359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-v5gmn,Uid:8802cf18-2cd9-4f10-8c3f-a4a7535edef8,Namespace:default,Attempt:2,} returns sandbox id \"efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581\"" Mar 19 13:04:07.161463 kubelet[2040]: E0319 13:04:07.161337 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:07.807183 systemd-networkd[1429]: cali06ef6a8f5a8: Gained IPv6LL Mar 19 13:04:07.935088 systemd-networkd[1429]: cali9065ca0f894: Gained IPv6LL Mar 19 13:04:08.001932 kernel: bpftool[3152]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 19 13:04:08.161831 kubelet[2040]: E0319 13:04:08.161779 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:08.271320 systemd-networkd[1429]: vxlan.calico: Link UP Mar 
19 13:04:08.271330 systemd-networkd[1429]: vxlan.calico: Gained carrier Mar 19 13:04:08.953281 containerd[1526]: time="2025-03-19T13:04:08.953221048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:08.954422 containerd[1526]: time="2025-03-19T13:04:08.954384304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 19 13:04:08.955382 containerd[1526]: time="2025-03-19T13:04:08.955345504Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:08.957540 containerd[1526]: time="2025-03-19T13:04:08.957497450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:08.958368 containerd[1526]: time="2025-03-19T13:04:08.958244118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.116623959s" Mar 19 13:04:08.958368 containerd[1526]: time="2025-03-19T13:04:08.958276108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 19 13:04:08.960055 containerd[1526]: time="2025-03-19T13:04:08.959538600Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 19 13:04:08.960273 containerd[1526]: time="2025-03-19T13:04:08.960237598Z" level=info msg="CreateContainer within sandbox 
\"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 19 13:04:08.984904 containerd[1526]: time="2025-03-19T13:04:08.984829186Z" level=info msg="CreateContainer within sandbox \"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff\"" Mar 19 13:04:08.985694 containerd[1526]: time="2025-03-19T13:04:08.985654421Z" level=info msg="StartContainer for \"eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff\"" Mar 19 13:04:09.019818 systemd[1]: run-containerd-runc-k8s.io-eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff-runc.klzrGn.mount: Deactivated successfully. Mar 19 13:04:09.031120 systemd[1]: Started cri-containerd-eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff.scope - libcontainer container eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff. 
Mar 19 13:04:09.065489 containerd[1526]: time="2025-03-19T13:04:09.065439758Z" level=info msg="StartContainer for \"eb2b0f0ad436e5d5e1a7a402874c027701362b4a7e073ceba160993901831eff\" returns successfully" Mar 19 13:04:09.162641 kubelet[2040]: E0319 13:04:09.162499 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:09.727754 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL Mar 19 13:04:10.162858 kubelet[2040]: E0319 13:04:10.162754 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:11.164068 kubelet[2040]: E0319 13:04:11.163995 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:12.164502 kubelet[2040]: E0319 13:04:12.164133 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:12.932371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366491024.mount: Deactivated successfully. 
Mar 19 13:04:13.164900 kubelet[2040]: E0319 13:04:13.164814 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:13.987925 containerd[1526]: time="2025-03-19T13:04:13.987787283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:13.989516 containerd[1526]: time="2025-03-19T13:04:13.989461497Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 19 13:04:13.991267 containerd[1526]: time="2025-03-19T13:04:13.991182749Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:13.995461 containerd[1526]: time="2025-03-19T13:04:13.995393610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:13.997010 containerd[1526]: time="2025-03-19T13:04:13.996835659Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 5.037273045s" Mar 19 13:04:13.997010 containerd[1526]: time="2025-03-19T13:04:13.996876215Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 19 13:04:13.999681 containerd[1526]: time="2025-03-19T13:04:13.999379882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 19 13:04:14.000145 containerd[1526]: 
time="2025-03-19T13:04:14.000071737Z" level=info msg="CreateContainer within sandbox \"efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 19 13:04:14.018579 containerd[1526]: time="2025-03-19T13:04:14.018470715Z" level=info msg="CreateContainer within sandbox \"efbc8661cdf316075bab49a10d789dc5fc63674f75df1763faa4e214f6be6581\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500\"" Mar 19 13:04:14.019585 containerd[1526]: time="2025-03-19T13:04:14.019504931Z" level=info msg="StartContainer for \"852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500\"" Mar 19 13:04:14.060220 systemd[1]: run-containerd-runc-k8s.io-852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500-runc.PK6YWx.mount: Deactivated successfully. Mar 19 13:04:14.072232 systemd[1]: Started cri-containerd-852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500.scope - libcontainer container 852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500. 
Mar 19 13:04:14.100458 containerd[1526]: time="2025-03-19T13:04:14.099703667Z" level=info msg="StartContainer for \"852658d58acb1e024d0cd6a43a8d043bc2fa49a9073aa96d30e55db09f676500\" returns successfully" Mar 19 13:04:14.165966 kubelet[2040]: E0319 13:04:14.165866 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:14.411126 kubelet[2040]: I0319 13:04:14.410980 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-v5gmn" podStartSLOduration=4.282064214 podStartE2EDuration="11.41096111s" podCreationTimestamp="2025-03-19 13:04:03 +0000 UTC" firstStartedPulling="2025-03-19 13:04:06.869684651 +0000 UTC m=+22.528616236" lastFinishedPulling="2025-03-19 13:04:13.998581567 +0000 UTC m=+29.657513132" observedRunningTime="2025-03-19 13:04:14.410614942 +0000 UTC m=+30.069546597" watchObservedRunningTime="2025-03-19 13:04:14.41096111 +0000 UTC m=+30.069892674" Mar 19 13:04:15.166670 kubelet[2040]: E0319 13:04:15.166606 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:16.167921 kubelet[2040]: E0319 13:04:16.167119 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:16.479836 containerd[1526]: time="2025-03-19T13:04:16.479657300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 19 13:04:16.480318 containerd[1526]: time="2025-03-19T13:04:16.479881579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:16.482322 containerd[1526]: time="2025-03-19T13:04:16.481169360Z" level=info msg="ImageCreate event 
name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:16.486151 containerd[1526]: time="2025-03-19T13:04:16.486106642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:16.486985 containerd[1526]: time="2025-03-19T13:04:16.486954640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.487438634s" Mar 19 13:04:16.487061 containerd[1526]: time="2025-03-19T13:04:16.487048516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 19 13:04:16.488846 containerd[1526]: time="2025-03-19T13:04:16.488811676Z" level=info msg="CreateContainer within sandbox \"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 19 13:04:16.507740 containerd[1526]: time="2025-03-19T13:04:16.507681128Z" level=info msg="CreateContainer within sandbox \"9d904865cf9927fe2faa400a1aa692e8afbde557fb50ee4664286ea0fecdcf7e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65a83e1689d48a30a91013b92d0219418d6d79ac1ff6fa677f4e16e792f57a37\"" Mar 19 13:04:16.508492 containerd[1526]: time="2025-03-19T13:04:16.508462161Z" level=info msg="StartContainer for 
\"65a83e1689d48a30a91013b92d0219418d6d79ac1ff6fa677f4e16e792f57a37\"" Mar 19 13:04:16.538095 systemd[1]: Started cri-containerd-65a83e1689d48a30a91013b92d0219418d6d79ac1ff6fa677f4e16e792f57a37.scope - libcontainer container 65a83e1689d48a30a91013b92d0219418d6d79ac1ff6fa677f4e16e792f57a37. Mar 19 13:04:16.570971 containerd[1526]: time="2025-03-19T13:04:16.570924997Z" level=info msg="StartContainer for \"65a83e1689d48a30a91013b92d0219418d6d79ac1ff6fa677f4e16e792f57a37\" returns successfully" Mar 19 13:04:17.168190 kubelet[2040]: E0319 13:04:17.168085 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:17.255270 kubelet[2040]: I0319 13:04:17.255228 2040 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 19 13:04:17.255270 kubelet[2040]: I0319 13:04:17.255259 2040 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 19 13:04:17.425097 kubelet[2040]: I0319 13:04:17.424455 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k6pv8" podStartSLOduration=22.77778984 podStartE2EDuration="32.42442247s" podCreationTimestamp="2025-03-19 13:03:45 +0000 UTC" firstStartedPulling="2025-03-19 13:04:06.841034203 +0000 UTC m=+22.499965758" lastFinishedPulling="2025-03-19 13:04:16.487666822 +0000 UTC m=+32.146598388" observedRunningTime="2025-03-19 13:04:17.424082684 +0000 UTC m=+33.083014319" watchObservedRunningTime="2025-03-19 13:04:17.42442247 +0000 UTC m=+33.083354075" Mar 19 13:04:18.168681 kubelet[2040]: E0319 13:04:18.168591 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:19.168791 kubelet[2040]: E0319 13:04:19.168719 2040 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:20.170032 kubelet[2040]: E0319 13:04:20.169947 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:21.170529 kubelet[2040]: E0319 13:04:21.170438 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:22.171387 kubelet[2040]: E0319 13:04:22.171316 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:23.171992 kubelet[2040]: E0319 13:04:23.171935 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:24.173098 kubelet[2040]: E0319 13:04:24.173012 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:25.138003 kubelet[2040]: E0319 13:04:25.137932 2040 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:25.173849 kubelet[2040]: E0319 13:04:25.173764 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:25.388287 systemd[1]: Created slice kubepods-besteffort-pod78f79cd3_cebf_45f3_beee_0314c16053f0.slice - libcontainer container kubepods-besteffort-pod78f79cd3_cebf_45f3_beee_0314c16053f0.slice. 
Mar 19 13:04:25.446539 kubelet[2040]: I0319 13:04:25.446485 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/78f79cd3-cebf-45f3-beee-0314c16053f0-data\") pod \"nfs-server-provisioner-0\" (UID: \"78f79cd3-cebf-45f3-beee-0314c16053f0\") " pod="default/nfs-server-provisioner-0" Mar 19 13:04:25.446539 kubelet[2040]: I0319 13:04:25.446546 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdbz\" (UniqueName: \"kubernetes.io/projected/78f79cd3-cebf-45f3-beee-0314c16053f0-kube-api-access-vxdbz\") pod \"nfs-server-provisioner-0\" (UID: \"78f79cd3-cebf-45f3-beee-0314c16053f0\") " pod="default/nfs-server-provisioner-0" Mar 19 13:04:25.692658 containerd[1526]: time="2025-03-19T13:04:25.692533851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:78f79cd3-cebf-45f3-beee-0314c16053f0,Namespace:default,Attempt:0,}" Mar 19 13:04:25.870262 systemd-networkd[1429]: cali60e51b789ff: Link UP Mar 19 13:04:25.871757 systemd-networkd[1429]: cali60e51b789ff: Gained carrier Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.763 [INFO][3442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 78f79cd3-cebf-45f3-beee-0314c16053f0 1830 0 2025-03-19 13:04:25 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.763 [INFO][3442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.804 [INFO][3454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" HandleID="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.818 [INFO][3454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" HandleID="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332d50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-19 13:04:25.804051489 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.818 [INFO][3454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.818 [INFO][3454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.818 [INFO][3454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.821 [INFO][3454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.828 [INFO][3454] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.835 [INFO][3454] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.838 [INFO][3454] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.842 [INFO][3454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.842 [INFO][3454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.845 [INFO][3454] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.852 [INFO][3454] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.99.192/26 handle="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.862 [INFO][3454] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.862 [INFO][3454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" host="10.0.0.4" Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.862 [INFO][3454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 13:04:25.886745 containerd[1526]: 2025-03-19 13:04:25.862 [INFO][3454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" HandleID="k8s-pod-network.44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.888027 containerd[1526]: 2025-03-19 13:04:25.865 [INFO][3442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"78f79cd3-cebf-45f3-beee-0314c16053f0", ResourceVersion:"1830", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:25.888027 containerd[1526]: 2025-03-19 13:04:25.865 [INFO][3442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.888027 containerd[1526]: 2025-03-19 13:04:25.865 [INFO][3442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.888027 containerd[1526]: 2025-03-19 13:04:25.870 [INFO][3442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.888327 containerd[1526]: 2025-03-19 13:04:25.870 [INFO][3442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"78f79cd3-cebf-45f3-beee-0314c16053f0", ResourceVersion:"1830", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"16:02:8d:0b:4c:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:25.888327 containerd[1526]: 2025-03-19 13:04:25.883 [INFO][3442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 19 13:04:25.917587 containerd[1526]: 
time="2025-03-19T13:04:25.917451322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 13:04:25.917763 containerd[1526]: time="2025-03-19T13:04:25.917608938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 13:04:25.917763 containerd[1526]: time="2025-03-19T13:04:25.917666275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:25.918959 containerd[1526]: time="2025-03-19T13:04:25.918491660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 13:04:25.949270 systemd[1]: Started cri-containerd-44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c.scope - libcontainer container 44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c. Mar 19 13:04:25.995170 containerd[1526]: time="2025-03-19T13:04:25.995110810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:78f79cd3-cebf-45f3-beee-0314c16053f0,Namespace:default,Attempt:0,} returns sandbox id \"44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c\"" Mar 19 13:04:25.996980 containerd[1526]: time="2025-03-19T13:04:25.996847613Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 19 13:04:26.174398 kubelet[2040]: E0319 13:04:26.174314 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:26.557774 systemd[1]: run-containerd-runc-k8s.io-44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c-runc.OfiTNW.mount: Deactivated successfully. 
Mar 19 13:04:27.175659 kubelet[2040]: E0319 13:04:27.175472 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:27.263082 systemd-networkd[1429]: cali60e51b789ff: Gained IPv6LL Mar 19 13:04:28.176692 kubelet[2040]: E0319 13:04:28.176590 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:29.160612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706629074.mount: Deactivated successfully. Mar 19 13:04:29.177393 kubelet[2040]: E0319 13:04:29.177332 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:30.177947 kubelet[2040]: E0319 13:04:30.177791 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:30.706916 containerd[1526]: time="2025-03-19T13:04:30.706824547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:30.708197 containerd[1526]: time="2025-03-19T13:04:30.708094545Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039476" Mar 19 13:04:30.709407 containerd[1526]: time="2025-03-19T13:04:30.709330350Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:30.719438 containerd[1526]: time="2025-03-19T13:04:30.719386602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 13:04:30.720618 containerd[1526]: time="2025-03-19T13:04:30.720506991Z" 
level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.723503065s" Mar 19 13:04:30.720618 containerd[1526]: time="2025-03-19T13:04:30.720536436Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 19 13:04:30.722691 containerd[1526]: time="2025-03-19T13:04:30.722637149Z" level=info msg="CreateContainer within sandbox \"44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 19 13:04:30.737480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438232041.mount: Deactivated successfully. Mar 19 13:04:30.739598 containerd[1526]: time="2025-03-19T13:04:30.739545034Z" level=info msg="CreateContainer within sandbox \"44871b83b4c20826acf1fa2c526f7b8ef76c11cc2f6b01ad56518ee3d9f7232c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130\"" Mar 19 13:04:30.740341 containerd[1526]: time="2025-03-19T13:04:30.740309706Z" level=info msg="StartContainer for \"864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130\"" Mar 19 13:04:30.794130 systemd[1]: run-containerd-runc-k8s.io-864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130-runc.6RtZ2a.mount: Deactivated successfully. Mar 19 13:04:30.802126 systemd[1]: Started cri-containerd-864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130.scope - libcontainer container 864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130. 
Mar 19 13:04:30.829277 containerd[1526]: time="2025-03-19T13:04:30.829160533Z" level=info msg="StartContainer for \"864d8049f9c7bbeee8ee6258c96c0ea7633aec945700683d6af7cfa488ec3130\" returns successfully" Mar 19 13:04:31.178643 kubelet[2040]: E0319 13:04:31.178579 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:31.456531 kubelet[2040]: I0319 13:04:31.456351 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.731262496 podStartE2EDuration="6.456325893s" podCreationTimestamp="2025-03-19 13:04:25 +0000 UTC" firstStartedPulling="2025-03-19 13:04:25.996493079 +0000 UTC m=+41.655424644" lastFinishedPulling="2025-03-19 13:04:30.721556477 +0000 UTC m=+46.380488041" observedRunningTime="2025-03-19 13:04:31.455138769 +0000 UTC m=+47.114070355" watchObservedRunningTime="2025-03-19 13:04:31.456325893 +0000 UTC m=+47.115257498" Mar 19 13:04:32.179632 kubelet[2040]: E0319 13:04:32.179547 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:33.180671 kubelet[2040]: E0319 13:04:33.180550 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:34.181479 kubelet[2040]: E0319 13:04:34.181419 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:35.181798 kubelet[2040]: E0319 13:04:35.181707 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:36.182098 kubelet[2040]: E0319 13:04:36.182009 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:37.183023 kubelet[2040]: E0319 13:04:37.182752 2040 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:38.183299 kubelet[2040]: E0319 13:04:38.183218 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:39.184492 kubelet[2040]: E0319 13:04:39.184389 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:40.185238 kubelet[2040]: E0319 13:04:40.185172 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:40.445589 systemd[1]: Created slice kubepods-besteffort-pod162bed9a_e270_4b5c_ac8e_31a3a20f2fa4.slice - libcontainer container kubepods-besteffort-pod162bed9a_e270_4b5c_ac8e_31a3a20f2fa4.slice. Mar 19 13:04:40.550493 kubelet[2040]: I0319 13:04:40.550351 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e34b94b0-1f0a-4e34-8009-3ff1a0d9b100\" (UniqueName: \"kubernetes.io/nfs/162bed9a-e270-4b5c-ac8e-31a3a20f2fa4-pvc-e34b94b0-1f0a-4e34-8009-3ff1a0d9b100\") pod \"test-pod-1\" (UID: \"162bed9a-e270-4b5c-ac8e-31a3a20f2fa4\") " pod="default/test-pod-1" Mar 19 13:04:40.550493 kubelet[2040]: I0319 13:04:40.550425 2040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8hr5\" (UniqueName: \"kubernetes.io/projected/162bed9a-e270-4b5c-ac8e-31a3a20f2fa4-kube-api-access-l8hr5\") pod \"test-pod-1\" (UID: \"162bed9a-e270-4b5c-ac8e-31a3a20f2fa4\") " pod="default/test-pod-1" Mar 19 13:04:40.698024 kernel: FS-Cache: Loaded Mar 19 13:04:40.768381 kernel: RPC: Registered named UNIX socket transport module. Mar 19 13:04:40.768518 kernel: RPC: Registered udp transport module. Mar 19 13:04:40.768541 kernel: RPC: Registered tcp transport module. Mar 19 13:04:40.768561 kernel: RPC: Registered tcp-with-tls transport module. 
Mar 19 13:04:40.769239 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 19 13:04:41.019944 kernel: NFS: Registering the id_resolver key type Mar 19 13:04:41.020099 kernel: Key type id_resolver registered Mar 19 13:04:41.023969 kernel: Key type id_legacy registered Mar 19 13:04:41.060930 nfsidmap[3662]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 19 13:04:41.065013 nfsidmap[3663]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 19 13:04:41.185874 kubelet[2040]: E0319 13:04:41.185742 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 13:04:41.351963 containerd[1526]: time="2025-03-19T13:04:41.351849013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:162bed9a-e270-4b5c-ac8e-31a3a20f2fa4,Namespace:default,Attempt:0,}" Mar 19 13:04:41.516005 systemd-networkd[1429]: cali5ec59c6bf6e: Link UP Mar 19 13:04:41.517788 systemd-networkd[1429]: cali5ec59c6bf6e: Gained carrier Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.416 [INFO][3665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 162bed9a-e270-4b5c-ac8e-31a3a20f2fa4 1897 0 2025-03-19 13:04:27 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.416 [INFO][3665] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.451 [INFO][3677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" HandleID="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.468 [INFO][3677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" HandleID="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aa40), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-03-19 13:04:41.45168766 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.468 [INFO][3677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.468 [INFO][3677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.468 [INFO][3677] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.472 [INFO][3677] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.479 [INFO][3677] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.486 [INFO][3677] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.489 [INFO][3677] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.493 [INFO][3677] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.493 [INFO][3677] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.495 [INFO][3677] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.503 [INFO][3677] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.511 [INFO][3677] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.511 [INFO][3677] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" host="10.0.0.4" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.511 [INFO][3677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.511 [INFO][3677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" HandleID="k8s-pod-network.74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.513 [INFO][3665] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"162bed9a-e270-4b5c-ac8e-31a3a20f2fa4", ResourceVersion:"1897", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 13:04:41.531784 containerd[1526]: 2025-03-19 13:04:41.513 [INFO][3665] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.533500 containerd[1526]: 2025-03-19 13:04:41.513 [INFO][3665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.533500 containerd[1526]: 2025-03-19 13:04:41.516 [INFO][3665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 19 13:04:41.533500 containerd[1526]: 2025-03-19 13:04:41.517 [INFO][3665] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"162bed9a-e270-4b5c-ac8e-31a3a20f2fa4", ResourceVersion:"1897", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 13, 4, 
27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"aa:c7:12:ef:d3:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 19 13:04:41.533500 containerd[1526]: 2025-03-19 13:04:41.527 [INFO][3665] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Mar 19 13:04:41.562609 containerd[1526]: time="2025-03-19T13:04:41.562117035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 13:04:41.562609 containerd[1526]: time="2025-03-19T13:04:41.562431013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 13:04:41.562609 containerd[1526]: time="2025-03-19T13:04:41.562462452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:04:41.563681 containerd[1526]: time="2025-03-19T13:04:41.563584594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 13:04:41.586271 systemd[1]: Started cri-containerd-74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc.scope - libcontainer container 74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc.
Mar 19 13:04:41.635241 containerd[1526]: time="2025-03-19T13:04:41.635055583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:162bed9a-e270-4b5c-ac8e-31a3a20f2fa4,Namespace:default,Attempt:0,} returns sandbox id \"74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc\""
Mar 19 13:04:41.636594 containerd[1526]: time="2025-03-19T13:04:41.636450876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 19 13:04:42.135068 containerd[1526]: time="2025-03-19T13:04:42.134988926Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 13:04:42.136241 containerd[1526]: time="2025-03-19T13:04:42.136184566Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Mar 19 13:04:42.139638 containerd[1526]: time="2025-03-19T13:04:42.139580067Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 503.095025ms"
Mar 19 13:04:42.139638 containerd[1526]: time="2025-03-19T13:04:42.139630110Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\""
Mar 19 13:04:42.143326 containerd[1526]: time="2025-03-19T13:04:42.143271181Z" level=info msg="CreateContainer within sandbox \"74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Mar 19 13:04:42.159426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547469254.mount: Deactivated successfully.
Mar 19 13:04:42.162872 containerd[1526]: time="2025-03-19T13:04:42.162815501Z" level=info msg="CreateContainer within sandbox \"74223adfc63187ff5e43da8b8cd3b77a5e8942c2150be69539f024a8ee5ed5bc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cd2707a1bddca73a63665e7510edfe39273857348ffc20ac8f9a44a472f05c6e\""
Mar 19 13:04:42.163792 containerd[1526]: time="2025-03-19T13:04:42.163765953Z" level=info msg="StartContainer for \"cd2707a1bddca73a63665e7510edfe39273857348ffc20ac8f9a44a472f05c6e\""
Mar 19 13:04:42.187026 kubelet[2040]: E0319 13:04:42.186954 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:42.203220 systemd[1]: Started cri-containerd-cd2707a1bddca73a63665e7510edfe39273857348ffc20ac8f9a44a472f05c6e.scope - libcontainer container cd2707a1bddca73a63665e7510edfe39273857348ffc20ac8f9a44a472f05c6e.
Mar 19 13:04:42.234836 containerd[1526]: time="2025-03-19T13:04:42.234360421Z" level=info msg="StartContainer for \"cd2707a1bddca73a63665e7510edfe39273857348ffc20ac8f9a44a472f05c6e\" returns successfully"
Mar 19 13:04:42.488023 kubelet[2040]: I0319 13:04:42.487674 2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.983291175 podStartE2EDuration="15.487653695s" podCreationTimestamp="2025-03-19 13:04:27 +0000 UTC" firstStartedPulling="2025-03-19 13:04:41.636020822 +0000 UTC m=+57.294952397" lastFinishedPulling="2025-03-19 13:04:42.140383352 +0000 UTC m=+57.799314917" observedRunningTime="2025-03-19 13:04:42.487595416 +0000 UTC m=+58.146526981" watchObservedRunningTime="2025-03-19 13:04:42.487653695 +0000 UTC m=+58.146585270"
Mar 19 13:04:42.687372 systemd-networkd[1429]: cali5ec59c6bf6e: Gained IPv6LL
Mar 19 13:04:43.187477 kubelet[2040]: E0319 13:04:43.187349 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:44.187556 kubelet[2040]: E0319 13:04:44.187479 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:45.138196 kubelet[2040]: E0319 13:04:45.138111 2040 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:45.158599 containerd[1526]: time="2025-03-19T13:04:45.158548025Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:45.159225 containerd[1526]: time="2025-03-19T13:04:45.158679231Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully"
Mar 19 13:04:45.159225 containerd[1526]: time="2025-03-19T13:04:45.158693267Z" level=info msg="StopPodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully"
Mar 19 13:04:45.163041 containerd[1526]: time="2025-03-19T13:04:45.162969728Z" level=info msg="RemovePodSandbox for \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:45.172675 containerd[1526]: time="2025-03-19T13:04:45.172613554Z" level=info msg="Forcibly stopping sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\""
Mar 19 13:04:45.180907 containerd[1526]: time="2025-03-19T13:04:45.172778893Z" level=info msg="TearDown network for sandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" successfully"
Mar 19 13:04:45.190022 kubelet[2040]: E0319 13:04:45.189950 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:45.219509 containerd[1526]: time="2025-03-19T13:04:45.219415068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.219509 containerd[1526]: time="2025-03-19T13:04:45.219522991Z" level=info msg="RemovePodSandbox \"d635691f3fc347b9b0dde097ad6f8f1fb7a158f959a5244181e8923ba69ec9cd\" returns successfully"
Mar 19 13:04:45.220384 containerd[1526]: time="2025-03-19T13:04:45.220300467Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\""
Mar 19 13:04:45.220606 containerd[1526]: time="2025-03-19T13:04:45.220413919Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully"
Mar 19 13:04:45.220606 containerd[1526]: time="2025-03-19T13:04:45.220424158Z" level=info msg="StopPodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully"
Mar 19 13:04:45.220861 containerd[1526]: time="2025-03-19T13:04:45.220788962Z" level=info msg="RemovePodSandbox for \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\""
Mar 19 13:04:45.220861 containerd[1526]: time="2025-03-19T13:04:45.220817415Z" level=info msg="Forcibly stopping sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\""
Mar 19 13:04:45.221062 containerd[1526]: time="2025-03-19T13:04:45.220989057Z" level=info msg="TearDown network for sandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" successfully"
Mar 19 13:04:45.224752 containerd[1526]: time="2025-03-19T13:04:45.224669692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.224752 containerd[1526]: time="2025-03-19T13:04:45.224746746Z" level=info msg="RemovePodSandbox \"ac05139730e33a04fc4e5ac6b41d7a84ae28ef2fb195db9e52d50f5ac83610ce\" returns successfully"
Mar 19 13:04:45.225409 containerd[1526]: time="2025-03-19T13:04:45.225341119Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\""
Mar 19 13:04:45.225565 containerd[1526]: time="2025-03-19T13:04:45.225486151Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully"
Mar 19 13:04:45.225565 containerd[1526]: time="2025-03-19T13:04:45.225503093Z" level=info msg="StopPodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully"
Mar 19 13:04:45.226200 containerd[1526]: time="2025-03-19T13:04:45.225981338Z" level=info msg="RemovePodSandbox for \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\""
Mar 19 13:04:45.226200 containerd[1526]: time="2025-03-19T13:04:45.226014852Z" level=info msg="Forcibly stopping sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\""
Mar 19 13:04:45.226200 containerd[1526]: time="2025-03-19T13:04:45.226104680Z" level=info msg="TearDown network for sandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" successfully"
Mar 19 13:04:45.229721 containerd[1526]: time="2025-03-19T13:04:45.229645282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.229721 containerd[1526]: time="2025-03-19T13:04:45.229733106Z" level=info msg="RemovePodSandbox \"d21768dd09c99ebac14c485c0bfc5db98e4dcfefd2663629c184731b20466105\" returns successfully"
Mar 19 13:04:45.230583 containerd[1526]: time="2025-03-19T13:04:45.230353348Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\""
Mar 19 13:04:45.230583 containerd[1526]: time="2025-03-19T13:04:45.230502298Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully"
Mar 19 13:04:45.230583 containerd[1526]: time="2025-03-19T13:04:45.230516464Z" level=info msg="StopPodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully"
Mar 19 13:04:45.230959 containerd[1526]: time="2025-03-19T13:04:45.230912747Z" level=info msg="RemovePodSandbox for \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\""
Mar 19 13:04:45.231044 containerd[1526]: time="2025-03-19T13:04:45.231009418Z" level=info msg="Forcibly stopping sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\""
Mar 19 13:04:45.231518 containerd[1526]: time="2025-03-19T13:04:45.231222316Z" level=info msg="TearDown network for sandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" successfully"
Mar 19 13:04:45.234468 containerd[1526]: time="2025-03-19T13:04:45.234397355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.234468 containerd[1526]: time="2025-03-19T13:04:45.234466203Z" level=info msg="RemovePodSandbox \"ec450df9d9a34aab2722d84764369ef3ae4ad6340de88deebd0b96e3d06a32c3\" returns successfully"
Mar 19 13:04:45.235173 containerd[1526]: time="2025-03-19T13:04:45.235018318Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\""
Mar 19 13:04:45.235173 containerd[1526]: time="2025-03-19T13:04:45.235118816Z" level=info msg="TearDown network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" successfully"
Mar 19 13:04:45.235173 containerd[1526]: time="2025-03-19T13:04:45.235132212Z" level=info msg="StopPodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" returns successfully"
Mar 19 13:04:45.235660 containerd[1526]: time="2025-03-19T13:04:45.235610016Z" level=info msg="RemovePodSandbox for \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\""
Mar 19 13:04:45.235660 containerd[1526]: time="2025-03-19T13:04:45.235661884Z" level=info msg="Forcibly stopping sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\""
Mar 19 13:04:45.235818 containerd[1526]: time="2025-03-19T13:04:45.235769636Z" level=info msg="TearDown network for sandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" successfully"
Mar 19 13:04:45.247266 containerd[1526]: time="2025-03-19T13:04:45.247204517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.247266 containerd[1526]: time="2025-03-19T13:04:45.247263828Z" level=info msg="RemovePodSandbox \"7fdad75f94ac85eaf4350f7ebfecf659e13f4ae46fdb3c7e4836421e01e79119\" returns successfully"
Mar 19 13:04:45.247842 containerd[1526]: time="2025-03-19T13:04:45.247768794Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\""
Mar 19 13:04:45.248010 containerd[1526]: time="2025-03-19T13:04:45.247936517Z" level=info msg="TearDown network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" successfully"
Mar 19 13:04:45.248010 containerd[1526]: time="2025-03-19T13:04:45.247952277Z" level=info msg="StopPodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" returns successfully"
Mar 19 13:04:45.248394 containerd[1526]: time="2025-03-19T13:04:45.248338211Z" level=info msg="RemovePodSandbox for \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\""
Mar 19 13:04:45.248394 containerd[1526]: time="2025-03-19T13:04:45.248374809Z" level=info msg="Forcibly stopping sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\""
Mar 19 13:04:45.248502 containerd[1526]: time="2025-03-19T13:04:45.248442406Z" level=info msg="TearDown network for sandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" successfully"
Mar 19 13:04:45.253067 containerd[1526]: time="2025-03-19T13:04:45.253016955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.253748 containerd[1526]: time="2025-03-19T13:04:45.253288023Z" level=info msg="RemovePodSandbox \"26d6a3a39347cfd102b7368fe4ca4e6dc1898d7681e517253b94feee072eb67e\" returns successfully"
Mar 19 13:04:45.253818 containerd[1526]: time="2025-03-19T13:04:45.253774865Z" level=info msg="StopPodSandbox for \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\""
Mar 19 13:04:45.253949 containerd[1526]: time="2025-03-19T13:04:45.253882116Z" level=info msg="TearDown network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" successfully"
Mar 19 13:04:45.253949 containerd[1526]: time="2025-03-19T13:04:45.253937260Z" level=info msg="StopPodSandbox for \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" returns successfully"
Mar 19 13:04:45.254525 containerd[1526]: time="2025-03-19T13:04:45.254491728Z" level=info msg="RemovePodSandbox for \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\""
Mar 19 13:04:45.254525 containerd[1526]: time="2025-03-19T13:04:45.254519079Z" level=info msg="Forcibly stopping sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\""
Mar 19 13:04:45.254631 containerd[1526]: time="2025-03-19T13:04:45.254585414Z" level=info msg="TearDown network for sandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" successfully"
Mar 19 13:04:45.259586 containerd[1526]: time="2025-03-19T13:04:45.259110901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.259586 containerd[1526]: time="2025-03-19T13:04:45.259196812Z" level=info msg="RemovePodSandbox \"eed76a7ffe4484747adc1202837895429a01d47e64477962aa087e926b45dc62\" returns successfully"
Mar 19 13:04:45.259847 containerd[1526]: time="2025-03-19T13:04:45.259807336Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\""
Mar 19 13:04:45.260004 containerd[1526]: time="2025-03-19T13:04:45.259951886Z" level=info msg="TearDown network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" successfully"
Mar 19 13:04:45.260004 containerd[1526]: time="2025-03-19T13:04:45.259975921Z" level=info msg="StopPodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" returns successfully"
Mar 19 13:04:45.260454 containerd[1526]: time="2025-03-19T13:04:45.260413781Z" level=info msg="RemovePodSandbox for \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\""
Mar 19 13:04:45.260454 containerd[1526]: time="2025-03-19T13:04:45.260442866Z" level=info msg="Forcibly stopping sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\""
Mar 19 13:04:45.260595 containerd[1526]: time="2025-03-19T13:04:45.260511515Z" level=info msg="TearDown network for sandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" successfully"
Mar 19 13:04:45.265107 containerd[1526]: time="2025-03-19T13:04:45.265033636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.265323 containerd[1526]: time="2025-03-19T13:04:45.265114878Z" level=info msg="RemovePodSandbox \"997fce248399f14a21fdd3e2bbbd6e960fceb4a3e19f7579c0175c2d3dfd48c1\" returns successfully"
Mar 19 13:04:45.265672 containerd[1526]: time="2025-03-19T13:04:45.265641294Z" level=info msg="StopPodSandbox for \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\""
Mar 19 13:04:45.265770 containerd[1526]: time="2025-03-19T13:04:45.265747453Z" level=info msg="TearDown network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" successfully"
Mar 19 13:04:45.265770 containerd[1526]: time="2025-03-19T13:04:45.265760568Z" level=info msg="StopPodSandbox for \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" returns successfully"
Mar 19 13:04:45.266170 containerd[1526]: time="2025-03-19T13:04:45.266117706Z" level=info msg="RemovePodSandbox for \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\""
Mar 19 13:04:45.266215 containerd[1526]: time="2025-03-19T13:04:45.266169463Z" level=info msg="Forcibly stopping sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\""
Mar 19 13:04:45.266289 containerd[1526]: time="2025-03-19T13:04:45.266236789Z" level=info msg="TearDown network for sandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" successfully"
Mar 19 13:04:45.270039 containerd[1526]: time="2025-03-19T13:04:45.269962129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 13:04:45.270039 containerd[1526]: time="2025-03-19T13:04:45.270042329Z" level=info msg="RemovePodSandbox \"7c1941817e600b1fc4d4393538551cedf2df3e145f4df15ad2194c6bafb4b9f0\" returns successfully"
Mar 19 13:04:46.190195 kubelet[2040]: E0319 13:04:46.190103 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:47.191291 kubelet[2040]: E0319 13:04:47.191223 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:48.192168 kubelet[2040]: E0319 13:04:48.192090 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:49.192497 kubelet[2040]: E0319 13:04:49.192373 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:50.193315 kubelet[2040]: E0319 13:04:50.192738 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:51.193361 kubelet[2040]: E0319 13:04:51.193271 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:52.194113 kubelet[2040]: E0319 13:04:52.194011 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:53.194405 kubelet[2040]: E0319 13:04:53.194299 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:54.195584 kubelet[2040]: E0319 13:04:54.195517 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:55.196235 kubelet[2040]: E0319 13:04:55.196086 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:56.197352 kubelet[2040]: E0319 13:04:56.197273 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:57.197844 kubelet[2040]: E0319 13:04:57.197763 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:58.198394 kubelet[2040]: E0319 13:04:58.198278 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:04:59.199454 kubelet[2040]: E0319 13:04:59.199387 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:00.200524 kubelet[2040]: E0319 13:05:00.200447 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:01.201026 kubelet[2040]: E0319 13:05:01.200958 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:02.201956 kubelet[2040]: E0319 13:05:02.201864 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:03.202616 kubelet[2040]: E0319 13:05:03.202504 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:04.203090 kubelet[2040]: E0319 13:05:04.203011 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:05.138002 kubelet[2040]: E0319 13:05:05.137922 2040 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 13:05:05.204204 kubelet[2040]: E0319 13:05:05.204113 2040 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"