Mar 19 11:43:30.963095 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025 Mar 19 11:43:30.963125 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:43:30.963138 kernel: BIOS-provided physical RAM map: Mar 19 11:43:30.963146 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 19 11:43:30.963152 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 19 11:43:30.963159 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 19 11:43:30.963168 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Mar 19 11:43:30.963175 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Mar 19 11:43:30.963182 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 19 11:43:30.963189 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 19 11:43:30.963199 kernel: NX (Execute Disable) protection: active Mar 19 11:43:30.963210 kernel: APIC: Static calls initialized Mar 19 11:43:30.963218 kernel: SMBIOS 2.8 present. Mar 19 11:43:30.963225 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Mar 19 11:43:30.963234 kernel: Hypervisor detected: KVM Mar 19 11:43:30.963242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 19 11:43:30.963255 kernel: kvm-clock: using sched offset of 3379096067 cycles Mar 19 11:43:30.963264 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 19 11:43:30.963273 kernel: tsc: Detected 2494.140 MHz processor Mar 19 11:43:30.963281 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 19 11:43:30.965352 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 19 11:43:30.965364 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Mar 19 11:43:30.965374 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 19 11:43:30.965383 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 19 11:43:30.965401 kernel: ACPI: Early table checksum verification disabled Mar 19 11:43:30.965409 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Mar 19 11:43:30.965417 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965426 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965434 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965445 kernel: ACPI: FACS 0x000000007FFE0000 000040 Mar 19 11:43:30.965457 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965469 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965481 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965497 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:43:30.965505 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Mar 19 11:43:30.965513 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Mar 19 11:43:30.965521 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 19 11:43:30.965530 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Mar 19 11:43:30.965537 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Mar 19 11:43:30.965546 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Mar 19 11:43:30.965558 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Mar 19 11:43:30.965575 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 19 11:43:30.965596 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 19 11:43:30.965608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 19 11:43:30.965620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 19 11:43:30.965633 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Mar 19 11:43:30.965645 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Mar 19 11:43:30.965663 kernel: Zone ranges: Mar 19 11:43:30.965675 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 19 11:43:30.965687 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Mar 19 11:43:30.965698 kernel: Normal empty Mar 19 11:43:30.965711 kernel: Movable zone start for each node Mar 19 11:43:30.965722 kernel: Early memory node ranges Mar 19 11:43:30.965733 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 19 11:43:30.965745 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Mar 19 11:43:30.965756 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Mar 19 11:43:30.965767 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 19 11:43:30.965783 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 19 11:43:30.965800 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Mar 19 11:43:30.965812 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 19 11:43:30.965823 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 19 11:43:30.965836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 19 11:43:30.965848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 19 11:43:30.965859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 19 11:43:30.965871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 19 11:43:30.965882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 19 11:43:30.965899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 19 11:43:30.965910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 19 11:43:30.965922 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 19 11:43:30.965935 kernel: TSC deadline timer available Mar 19 11:43:30.965948 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 19 11:43:30.965961 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 19 11:43:30.965975 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Mar 19 11:43:30.966013 kernel: Booting paravirtualized kernel on KVM Mar 19 11:43:30.966023 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 19 11:43:30.966042 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 19 11:43:30.966054 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Mar 19 11:43:30.966067 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 19 11:43:30.966080 kernel: pcpu-alloc: [0] 0 1 Mar 19 11:43:30.966092 kernel: kvm-guest: PV spinlocks disabled, no host support Mar 19 11:43:30.966106 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:43:30.966115 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 19 11:43:30.966123 kernel: random: crng init done Mar 19 11:43:30.966141 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 19 11:43:30.966154 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 19 11:43:30.966166 kernel: Fallback order for Node 0: 0 Mar 19 11:43:30.966178 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Mar 19 11:43:30.966207 kernel: Policy zone: DMA32 Mar 19 11:43:30.966216 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:43:30.966225 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 127196K reserved, 0K cma-reserved) Mar 19 11:43:30.966233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 19 11:43:30.966242 kernel: Kernel/User page tables isolation: enabled Mar 19 11:43:30.966254 kernel: ftrace: allocating 37910 entries in 149 pages Mar 19 11:43:30.966263 kernel: ftrace: allocated 149 pages with 4 groups Mar 19 11:43:30.966271 kernel: Dynamic Preempt: voluntary Mar 19 11:43:30.966280 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:43:30.966318 kernel: rcu: RCU event tracing is enabled. Mar 19 11:43:30.966331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 19 11:43:30.966343 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:43:30.966354 kernel: Rude variant of Tasks RCU enabled. Mar 19 11:43:30.966366 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:43:30.966383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 19 11:43:30.966395 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 19 11:43:30.966406 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 19 11:43:30.966423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
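The command line echoed above (both at boot and again as "Kernel command line:") is what the initrd later re-reads for parameters like root=LABEL=ROOT, mount.usr= and flatcar.oem.id=digitalocean. A minimal sketch of reading it back from /proc/cmdline, assuming simple space-separated tokens (quoted values are not handled); this is illustrative, not dracut's or Flatcar's own parser:

```python
# Minimal sketch: split /proc/cmdline into bare flags and key=value pairs.
# Illustrative only; quoted values with spaces are not handled.
def parse_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    flags, params = [], {}
    for tok in tokens:
        if "=" in tok:
            key, _, value = tok.partition("=")
            params[key] = value
        else:
            flags.append(tok)
    return flags, params

if __name__ == "__main__":
    flags, params = parse_cmdline()
    print("root =", params.get("root"))              # e.g. LABEL=ROOT in this log
    print("oem id =", params.get("flatcar.oem.id"))  # e.g. digitalocean
```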
Mar 19 11:43:30.966439 kernel: Console: colour VGA+ 80x25 Mar 19 11:43:30.966451 kernel: printk: console [tty0] enabled Mar 19 11:43:30.966459 kernel: printk: console [ttyS0] enabled Mar 19 11:43:30.966468 kernel: ACPI: Core revision 20230628 Mar 19 11:43:30.966476 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 19 11:43:30.966490 kernel: APIC: Switch to symmetric I/O mode setup Mar 19 11:43:30.966502 kernel: x2apic enabled Mar 19 11:43:30.966515 kernel: APIC: Switched APIC routing to: physical x2apic Mar 19 11:43:30.966528 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 19 11:43:30.966539 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Mar 19 11:43:30.966550 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Mar 19 11:43:30.966560 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 19 11:43:30.966569 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 19 11:43:30.966590 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 19 11:43:30.966599 kernel: Spectre V2 : Mitigation: Retpolines Mar 19 11:43:30.966608 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 19 11:43:30.966617 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 19 11:43:30.966634 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 19 11:43:30.966649 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 19 11:43:30.966662 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 19 11:43:30.966675 kernel: MDS: Mitigation: Clear CPU buffers Mar 19 11:43:30.966689 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 19 11:43:30.966712 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 19 11:43:30.966724 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 19 11:43:30.966738 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 19 11:43:30.966752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 19 11:43:30.966764 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 19 11:43:30.966778 kernel: Freeing SMP alternatives memory: 32K Mar 19 11:43:30.966791 kernel: pid_max: default: 32768 minimum: 301 Mar 19 11:43:30.966804 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:43:30.966822 kernel: landlock: Up and running. Mar 19 11:43:30.966834 kernel: SELinux: Initializing. Mar 19 11:43:30.966847 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 19 11:43:30.966860 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 19 11:43:30.966874 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Mar 19 11:43:30.966887 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:43:30.966902 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:43:30.966911 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:43:30.966920 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
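The Spectre/MDS/MMIO lines above are the same mitigation strings the kernel exposes under /sys/devices/system/cpu/vulnerabilities once userspace is up. A minimal sketch that reads them back, assuming only the standard sysfs layout:

```python
# Minimal sketch: read the mitigation status the kernel reports for each CPU
# vulnerability (Spectre V1/V2, MDS, MMIO Stale Data, ...), matching the
# boot-log lines above. Standard sysfs layout assumed.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_mitigations():
    status = {}
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status[name] = f.read().strip()
    return status

if __name__ == "__main__":
    for vuln, state in read_mitigations().items():
        print(f"{vuln}: {state}")   # e.g. "spectre_v2: Mitigation: Retpolines"
```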
Mar 19 11:43:30.966934 kernel: signal: max sigframe size: 1776 Mar 19 11:43:30.966943 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:43:30.966953 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:43:30.966962 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 19 11:43:30.966974 kernel: smp: Bringing up secondary CPUs ... Mar 19 11:43:30.966985 kernel: smpboot: x86: Booting SMP configuration: Mar 19 11:43:30.966995 kernel: .... node #0, CPUs: #1 Mar 19 11:43:30.967008 kernel: smp: Brought up 1 node, 2 CPUs Mar 19 11:43:30.967022 kernel: smpboot: Max logical packages: 1 Mar 19 11:43:30.967035 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Mar 19 11:43:30.967044 kernel: devtmpfs: initialized Mar 19 11:43:30.967054 kernel: x86/mm: Memory block size: 128MB Mar 19 11:43:30.967063 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:43:30.967072 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 19 11:43:30.967081 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:43:30.967094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:43:30.967104 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:43:30.967113 kernel: audit: type=2000 audit(1742384609.873:1): state=initialized audit_enabled=0 res=1 Mar 19 11:43:30.967125 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:43:30.967138 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 19 11:43:30.967152 kernel: cpuidle: using governor menu Mar 19 11:43:30.967165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:43:30.967178 kernel: dca service started, version 1.12.1 Mar 19 11:43:30.967192 kernel: PCI: Using configuration type 1 for base access Mar 19 11:43:30.967203 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
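kvm-clock, hpet and tsc-early are all registered above, and a later entry records the switch to kvm-clock. A small sketch to confirm the final choice from sysfs, assuming the usual clocksource0 directory:

```python
# Minimal sketch: show which clocksource the kernel ended up with. The log
# registers kvm-clock, hpet and tsc-early; a later entry switches to kvm-clock.
BASE = "/sys/devices/system/clocksource/clocksource0"

def clocksources():
    with open(f"{BASE}/current_clocksource") as f:
        current = f.read().strip()
    with open(f"{BASE}/available_clocksource") as f:
        available = f.read().split()
    return current, available

if __name__ == "__main__":
    current, available = clocksources()
    print("current:", current)       # expected "kvm-clock" on this droplet
    print("available:", available)   # e.g. ['kvm-clock', 'tsc', 'hpet', 'acpi_pm']
```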
Mar 19 11:43:30.967213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:43:30.967224 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:43:30.967244 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:43:30.967257 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:43:30.967270 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:43:30.967300 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:43:30.967309 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:43:30.967339 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 19 11:43:30.967351 kernel: ACPI: Interpreter enabled Mar 19 11:43:30.967360 kernel: ACPI: PM: (supports S0 S5) Mar 19 11:43:30.967369 kernel: ACPI: Using IOAPIC for interrupt routing Mar 19 11:43:30.967386 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 19 11:43:30.967398 kernel: PCI: Using E820 reservations for host bridge windows Mar 19 11:43:30.967415 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 19 11:43:30.967428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 19 11:43:30.967773 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 19 11:43:30.967911 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 19 11:43:30.968032 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 19 11:43:30.968055 kernel: acpiphp: Slot [3] registered Mar 19 11:43:30.968068 kernel: acpiphp: Slot [4] registered Mar 19 11:43:30.968081 kernel: acpiphp: Slot [5] registered Mar 19 11:43:30.968094 kernel: acpiphp: Slot [6] registered Mar 19 11:43:30.968108 kernel: acpiphp: Slot [7] registered Mar 19 11:43:30.968119 kernel: acpiphp: Slot [8] registered Mar 19 11:43:30.968128 kernel: acpiphp: Slot [9] registered Mar 19 11:43:30.968137 kernel: acpiphp: Slot [10] registered Mar 19 11:43:30.968146 kernel: acpiphp: Slot [11] registered Mar 19 11:43:30.968155 kernel: acpiphp: Slot [12] registered Mar 19 11:43:30.968168 kernel: acpiphp: Slot [13] registered Mar 19 11:43:30.968177 kernel: acpiphp: Slot [14] registered Mar 19 11:43:30.968186 kernel: acpiphp: Slot [15] registered Mar 19 11:43:30.968195 kernel: acpiphp: Slot [16] registered Mar 19 11:43:30.968204 kernel: acpiphp: Slot [17] registered Mar 19 11:43:30.968213 kernel: acpiphp: Slot [18] registered Mar 19 11:43:30.968222 kernel: acpiphp: Slot [19] registered Mar 19 11:43:30.968231 kernel: acpiphp: Slot [20] registered Mar 19 11:43:30.968245 kernel: acpiphp: Slot [21] registered Mar 19 11:43:30.968264 kernel: acpiphp: Slot [22] registered Mar 19 11:43:30.968277 kernel: acpiphp: Slot [23] registered Mar 19 11:43:30.970362 kernel: acpiphp: Slot [24] registered Mar 19 11:43:30.970390 kernel: acpiphp: Slot [25] registered Mar 19 11:43:30.970406 kernel: acpiphp: Slot [26] registered Mar 19 11:43:30.970422 kernel: acpiphp: Slot [27] registered Mar 19 11:43:30.970438 kernel: acpiphp: Slot [28] registered Mar 19 11:43:30.970454 kernel: acpiphp: Slot [29] registered Mar 19 11:43:30.970470 kernel: acpiphp: Slot [30] registered Mar 19 11:43:30.970485 kernel: acpiphp: Slot [31] registered Mar 19 11:43:30.970511 kernel: PCI host bridge to bus 0000:00 Mar 19 11:43:30.970785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 19 11:43:30.970942 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
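The PCI scan that follows lists the emulated chipset devices and the virtio disks/NICs (vendor 0x1af4). The same inventory can be read back from sysfs after boot; a minimal sketch, assuming the standard /sys/bus/pci layout:

```python
# Minimal sketch: list PCI devices the same way the log does, but from sysfs
# rather than dmesg. Vendor 0x1af4 is the virtio ID seen on this droplet.
import os

def list_pci():
    root = "/sys/bus/pci/devices"
    for addr in sorted(os.listdir(root)):
        with open(os.path.join(root, addr, "vendor")) as f:
            vendor = f.read().strip()
        with open(os.path.join(root, addr, "device")) as f:
            device = f.read().strip()
        with open(os.path.join(root, addr, "class")) as f:
            pci_class = f.read().strip()
        print(f"{addr} [{vendor}:{device}] class {pci_class}")

if __name__ == "__main__":
    list_pci()   # e.g. "0000:00:03.0 [0x1af4:0x1000] class 0x020000"
```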
Mar 19 11:43:30.971075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 19 11:43:30.971196 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 19 11:43:30.973393 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Mar 19 11:43:30.973530 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 19 11:43:30.973681 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 19 11:43:30.973818 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 19 11:43:30.973926 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Mar 19 11:43:30.974022 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Mar 19 11:43:30.974141 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 19 11:43:30.974310 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 19 11:43:30.974477 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 19 11:43:30.974593 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 19 11:43:30.974726 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Mar 19 11:43:30.974871 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Mar 19 11:43:30.974984 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Mar 19 11:43:30.975080 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Mar 19 11:43:30.975185 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Mar 19 11:43:30.975337 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Mar 19 11:43:30.975481 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Mar 19 11:43:30.975645 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Mar 19 11:43:30.975755 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Mar 19 11:43:30.975851 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Mar 19 11:43:30.975984 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 19 11:43:30.976111 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 19 11:43:30.976207 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Mar 19 11:43:30.978407 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Mar 19 11:43:30.978558 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Mar 19 11:43:30.978682 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 19 11:43:30.978826 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Mar 19 11:43:30.978930 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Mar 19 11:43:30.979037 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Mar 19 11:43:30.979157 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Mar 19 11:43:30.979281 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Mar 19 11:43:30.979392 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Mar 19 11:43:30.979488 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Mar 19 11:43:30.979713 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Mar 19 11:43:30.979878 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Mar 19 11:43:30.980005 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Mar 19 11:43:30.980152 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Mar 19 11:43:30.982404 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Mar 19 11:43:30.982582 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Mar 19 11:43:30.982687 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Mar 19 11:43:30.982832 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Mar 19 11:43:30.982971 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Mar 19 11:43:30.983133 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Mar 19 11:43:30.983267 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Mar 19 11:43:30.983301 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 19 11:43:30.983311 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 19 11:43:30.983327 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 19 11:43:30.983340 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 19 11:43:30.983352 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 19 11:43:30.983374 kernel: iommu: Default domain type: Translated Mar 19 11:43:30.983386 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 19 11:43:30.983398 kernel: PCI: Using ACPI for IRQ routing Mar 19 11:43:30.983412 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 19 11:43:30.983424 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 19 11:43:30.983436 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Mar 19 11:43:30.983615 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Mar 19 11:43:30.983763 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Mar 19 11:43:30.984014 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 19 11:43:30.984086 kernel: vgaarb: loaded Mar 19 11:43:30.984123 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 19 11:43:30.984234 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 19 11:43:30.984274 kernel: clocksource: Switched to clocksource kvm-clock Mar 19 11:43:30.986408 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 11:43:30.986423 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:43:30.986433 kernel: pnp: PnP ACPI init Mar 19 11:43:30.986442 kernel: pnp: PnP ACPI: found 4 devices Mar 19 11:43:30.986452 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 19 11:43:30.986472 kernel: NET: Registered PF_INET protocol family Mar 19 11:43:30.986482 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 19 11:43:30.986491 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 19 11:43:30.986501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:43:30.986510 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 19 11:43:30.986519 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 19 11:43:30.986529 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 19 11:43:30.986538 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 19 11:43:30.986550 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 19 11:43:30.986559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:43:30.986568 kernel: NET: Registered PF_XDP protocol family Mar 19 11:43:30.986735 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 19 11:43:30.986877 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 19 
11:43:30.987074 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 19 11:43:30.987172 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 19 11:43:30.987258 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Mar 19 11:43:30.988540 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Mar 19 11:43:30.988678 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 19 11:43:30.988694 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 19 11:43:30.988794 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40623 usecs Mar 19 11:43:30.988807 kernel: PCI: CLS 0 bytes, default 64 Mar 19 11:43:30.988817 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 19 11:43:30.988827 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Mar 19 11:43:30.988836 kernel: Initialise system trusted keyrings Mar 19 11:43:30.988846 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 19 11:43:30.988865 kernel: Key type asymmetric registered Mar 19 11:43:30.988877 kernel: Asymmetric key parser 'x509' registered Mar 19 11:43:30.988889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 19 11:43:30.988902 kernel: io scheduler mq-deadline registered Mar 19 11:43:30.988916 kernel: io scheduler kyber registered Mar 19 11:43:30.988930 kernel: io scheduler bfq registered Mar 19 11:43:30.988945 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 19 11:43:30.988955 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Mar 19 11:43:30.988964 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 19 11:43:30.988977 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 19 11:43:30.988986 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:43:30.988995 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 19 11:43:30.989005 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 19 11:43:30.989015 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 19 11:43:30.989024 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 19 11:43:30.989167 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 19 11:43:30.989187 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 19 11:43:30.989343 kernel: rtc_cmos 00:03: registered as rtc0 Mar 19 11:43:30.989466 kernel: rtc_cmos 00:03: setting system clock to 2025-03-19T11:43:30 UTC (1742384610) Mar 19 11:43:30.989557 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 19 11:43:30.989569 kernel: intel_pstate: CPU model not supported Mar 19 11:43:30.989579 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:43:30.989588 kernel: Segment Routing with IPv6 Mar 19 11:43:30.989598 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:43:30.989607 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:43:30.989616 kernel: Key type dns_resolver registered Mar 19 11:43:30.989632 kernel: IPI shorthand broadcast: enabled Mar 19 11:43:30.989642 kernel: sched_clock: Marking stable (955004406, 100835174)->(1191805182, -135965602) Mar 19 11:43:30.989651 kernel: registered taskstats version 1 Mar 19 11:43:30.989661 kernel: Loading compiled-in X.509 certificates Mar 19 11:43:30.989670 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 11:43:30.989679 kernel: Key type .fscrypt registered 
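The rtc_cmos entry above prints the same instant both as an ISO timestamp and as a Unix epoch (1742384610). A one-line check that the two representations agree:

```python
# Minimal sketch: confirm the rtc_cmos timestamp and epoch above describe the
# same instant. Pure arithmetic, not anything the initrd actually runs.
from datetime import datetime, timezone

epoch = 1742384610
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-03-19T11:43:30+00:00, matching "2025-03-19T11:43:30 UTC (1742384610)"
```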
Mar 19 11:43:30.989688 kernel: Key type fscrypt-provisioning registered Mar 19 11:43:30.989697 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 19 11:43:30.989710 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:43:30.989719 kernel: ima: No architecture policies found Mar 19 11:43:30.989728 kernel: clk: Disabling unused clocks Mar 19 11:43:30.989737 kernel: Freeing unused kernel image (initmem) memory: 43480K Mar 19 11:43:30.989746 kernel: Write protecting the kernel read-only data: 38912k Mar 19 11:43:30.989773 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 19 11:43:30.989785 kernel: Run /init as init process Mar 19 11:43:30.989795 kernel: with arguments: Mar 19 11:43:30.989805 kernel: /init Mar 19 11:43:30.989814 kernel: with environment: Mar 19 11:43:30.989826 kernel: HOME=/ Mar 19 11:43:30.989836 kernel: TERM=linux Mar 19 11:43:30.989845 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:43:30.989856 systemd[1]: Successfully made /usr/ read-only. Mar 19 11:43:30.989871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:43:30.989886 systemd[1]: Detected virtualization kvm. Mar 19 11:43:30.989900 systemd[1]: Detected architecture x86-64. Mar 19 11:43:30.989917 systemd[1]: Running in initrd. Mar 19 11:43:30.989932 systemd[1]: No hostname configured, using default hostname. Mar 19 11:43:30.989946 systemd[1]: Hostname set to . Mar 19 11:43:30.989961 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:43:30.989976 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:43:30.989992 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:43:30.990007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:43:30.990024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:43:30.990040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:43:30.990050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:43:30.990061 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:43:30.990072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:43:30.990082 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:43:30.990092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:43:30.990103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:43:30.990116 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:43:30.990126 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:43:30.990141 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:43:30.990151 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:43:30.990161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 19 11:43:30.990174 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:43:30.990184 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:43:30.990195 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 11:43:30.990205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:43:30.990215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:43:30.990225 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:43:30.990235 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:43:30.990245 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:43:30.990255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:43:30.990268 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:43:30.990278 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 11:43:30.990313 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:43:30.990328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:43:30.990341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:30.990351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:43:30.990361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:43:30.990375 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:43:30.990386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:43:30.990396 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:43:30.990453 systemd-journald[182]: Collecting audit messages is disabled. Mar 19 11:43:30.990492 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 11:43:30.990507 kernel: Bridge firewalling registered Mar 19 11:43:30.990523 systemd-journald[182]: Journal started Mar 19 11:43:30.990551 systemd-journald[182]: Runtime Journal (/run/log/journal/c4a1cdcb5db94c38bbb73e92b2c431b6) is 4.9M, max 39.3M, 34.4M free. Mar 19 11:43:30.956530 systemd-modules-load[183]: Inserted module 'overlay' Mar 19 11:43:31.008985 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:43:30.987085 systemd-modules-load[183]: Inserted module 'br_netfilter' Mar 19 11:43:31.013083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:43:31.013873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:31.020640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:31.029588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:43:31.031895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:43:31.042899 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:43:31.046397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:43:31.057661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:43:31.063712 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:43:31.065075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:43:31.066475 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:43:31.074518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:43:31.088633 dracut-cmdline[217]: dracut-dracut-053 Mar 19 11:43:31.093940 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:43:31.145736 systemd-resolved[222]: Positive Trust Anchors: Mar 19 11:43:31.145759 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:43:31.145815 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:43:31.149912 systemd-resolved[222]: Defaulting to hostname 'linux'. Mar 19 11:43:31.152015 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:43:31.152602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:31.227353 kernel: SCSI subsystem initialized Mar 19 11:43:31.239410 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:43:31.252352 kernel: iscsi: registered transport (tcp) Mar 19 11:43:31.277382 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:43:31.277504 kernel: QLogic iSCSI HBA Driver Mar 19 11:43:31.340574 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 11:43:31.346613 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:43:31.386358 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:43:31.386431 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:43:31.386457 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:43:31.432360 kernel: raid6: avx2x4 gen() 16423 MB/s Mar 19 11:43:31.449367 kernel: raid6: avx2x2 gen() 16832 MB/s Mar 19 11:43:31.466453 kernel: raid6: avx2x1 gen() 12916 MB/s Mar 19 11:43:31.466530 kernel: raid6: using algorithm avx2x2 gen() 16832 MB/s Mar 19 11:43:31.484674 kernel: raid6: .... xor() 18897 MB/s, rmw enabled Mar 19 11:43:31.484765 kernel: raid6: using avx2x2 recovery algorithm Mar 19 11:43:31.509332 kernel: xor: automatically using best checksumming function avx Mar 19 11:43:31.674336 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:43:31.688727 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Mar 19 11:43:31.694675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:43:31.723137 systemd-udevd[405]: Using default interface naming scheme 'v255'. Mar 19 11:43:31.729983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:43:31.739792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:43:31.757385 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 19 11:43:31.795311 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:43:31.801636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:43:31.874706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:31.880520 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:43:31.907947 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:31.910278 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:31.911395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:31.913438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:43:31.921050 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:43:31.951116 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:31.976444 kernel: scsi host0: Virtio SCSI HBA Mar 19 11:43:31.976544 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Mar 19 11:43:32.036125 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 19 11:43:32.036352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:43:32.036388 kernel: GPT:9289727 != 125829119 Mar 19 11:43:32.036415 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:43:32.036437 kernel: GPT:9289727 != 125829119 Mar 19 11:43:32.036459 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:43:32.036478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:32.036497 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Mar 19 11:43:32.060563 kernel: cryptd: max_cpu_qlen set to 1000 Mar 19 11:43:32.060593 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Mar 19 11:43:32.081033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:43:32.081233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:43:32.082254 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:32.087309 kernel: libata version 3.00 loaded. Mar 19 11:43:32.089386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:43:32.089649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:32.098712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:32.101880 kernel: AVX2 version of gcm_enc/dec engaged. 
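The virtio_blk messages above warn that the primary GPT header's recorded backup location (9289727) no longer matches the real last sector (125829119), which is expected on a freshly provisioned image whose virtual disk is larger than the original image. A minimal sketch of that comparison, using the UEFI GPT header layout; the device path /dev/vda and 512-byte sectors are assumptions taken from this log, and reading the raw device requires root:

```python
# Minimal sketch of the check behind "GPT:9289727 != 125829119": the primary
# GPT header (at LBA 1) records where it expects the backup header, and that
# recorded LBA is compared with the disk's actual last sector.
import os, struct

def gpt_backup_mismatch(dev="/dev/vda", sector=512):
    fd = os.open(dev, os.O_RDONLY)
    try:
        size_sectors = os.lseek(fd, 0, os.SEEK_END) // sector
        os.lseek(fd, sector, os.SEEK_SET)      # primary GPT header lives at LBA 1
        hdr = os.read(fd, 92)
    finally:
        os.close(fd)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    backup_lba = struct.unpack_from("<Q", hdr, 32)[0]   # AlternateLBA field
    last_lba = size_sectors - 1
    return backup_lba, last_lba                 # 9289727 vs 125829119 here

if __name__ == "__main__":
    backup, last = gpt_backup_mismatch()
    print("backup header LBA recorded:", backup, "actual last LBA:", last)
```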
Mar 19 11:43:32.101949 kernel: AES CTR mode by8 optimization enabled Mar 19 11:43:32.102734 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 19 11:43:32.109310 kernel: scsi host1: ata_piix Mar 19 11:43:32.109602 kernel: scsi host2: ata_piix Mar 19 11:43:32.109786 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 19 11:43:32.109807 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 19 11:43:32.106001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:32.113957 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:43:32.120215 kernel: ACPI: bus type USB registered Mar 19 11:43:32.120240 kernel: usbcore: registered new interface driver usbfs Mar 19 11:43:32.120262 kernel: usbcore: registered new interface driver hub Mar 19 11:43:32.120275 kernel: usbcore: registered new device driver usb Mar 19 11:43:32.158074 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (452) Mar 19 11:43:32.161436 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (449) Mar 19 11:43:32.191894 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:43:32.215946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:32.227055 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:43:32.241902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:43:32.242496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:43:32.261017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:43:32.271759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:43:32.276436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:32.296815 disk-uuid[543]: Primary Header is updated. Mar 19 11:43:32.296815 disk-uuid[543]: Secondary Entries is updated. Mar 19 11:43:32.296815 disk-uuid[543]: Secondary Header is updated. Mar 19 11:43:32.312434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:43:32.319497 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 19 11:43:32.319923 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 19 11:43:32.320270 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 19 11:43:32.320446 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Mar 19 11:43:32.320588 kernel: hub 1-0:1.0: USB hub found Mar 19 11:43:32.320824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:32.320846 kernel: hub 1-0:1.0: 2 ports detected Mar 19 11:43:32.329334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:33.329352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:33.329419 disk-uuid[546]: The operation has completed successfully. Mar 19 11:43:33.375336 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:43:33.375459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:43:33.418547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
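The device units above such as dev-disk-by\x2dpartuuid-7130c94a-… are systemd's escaped names for udev's /dev/disk/by-partuuid symlinks, and the same PARTUUID appears on the kernel command line as verity.usr=. A minimal sketch resolving that symlink to the underlying partition node:

```python
# Minimal sketch: resolve the by-partuuid symlink for the verity-protected
# /usr partition. The PARTUUID is taken from the kernel command line above.
import os

PARTUUID = "7130c94a-213a-4e5a-8e26-6cce9662f132"

link = f"/dev/disk/by-partuuid/{PARTUUID}"
print(link, "->", os.path.realpath(link))   # e.g. /dev/vda3 on a layout like this
```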
Mar 19 11:43:33.423229 sh[563]: Success Mar 19 11:43:33.438343 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 19 11:43:33.498777 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:43:33.509467 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:43:33.510825 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:43:33.539395 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4 Mar 19 11:43:33.539472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:43:33.540479 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:43:33.541502 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:43:33.542653 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:43:33.550973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:43:33.552274 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:43:33.560570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:43:33.563021 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:43:33.580569 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:43:33.580666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:43:33.581428 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:33.587415 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:33.603320 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:43:33.605103 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:43:33.611749 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:43:33.618512 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:43:33.736592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:33.745542 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:43:33.757884 ignition[656]: Ignition 2.20.0 Mar 19 11:43:33.757901 ignition[656]: Stage: fetch-offline Mar 19 11:43:33.757974 ignition[656]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.757992 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:33.758149 ignition[656]: parsed url from cmdline: "" Mar 19 11:43:33.758155 ignition[656]: no config URL provided Mar 19 11:43:33.758164 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:43:33.760124 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:33.758178 ignition[656]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:43:33.758187 ignition[656]: failed to fetch config: resource requires networking Mar 19 11:43:33.758772 ignition[656]: Ignition finished successfully Mar 19 11:43:33.796516 systemd-networkd[752]: lo: Link UP Mar 19 11:43:33.796530 systemd-networkd[752]: lo: Gained carrier Mar 19 11:43:33.801368 systemd-networkd[752]: Enumeration completed Mar 19 11:43:33.802104 systemd[1]: Started systemd-networkd.service - Network Configuration. 
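The networkd entries that follow report eth0 and eth1 coming up, gaining carrier and picking up DHCPv4 leases. The same link state is visible under /sys/class/net; a minimal sketch, assuming the interface names seen in this log and links that are already up:

```python
# Minimal sketch: read the per-interface state behind the "Link UP" /
# "Gained carrier" messages. Interface names are the ones from this log.
def link_state(iface):
    with open(f"/sys/class/net/{iface}/operstate") as f:
        oper = f.read().strip()
    with open(f"/sys/class/net/{iface}/carrier") as f:
        carrier = f.read().strip()
    return oper, carrier

if __name__ == "__main__":
    for iface in ("eth0", "eth1"):
        oper, carrier = link_state(iface)
        print(f"{iface}: operstate={oper} carrier={carrier}")
```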
Mar 19 11:43:33.802260 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 19 11:43:33.802267 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Mar 19 11:43:33.803031 systemd[1]: Reached target network.target - Network. Mar 19 11:43:33.804896 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:43:33.804903 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:43:33.805879 systemd-networkd[752]: eth0: Link UP Mar 19 11:43:33.805884 systemd-networkd[752]: eth0: Gained carrier Mar 19 11:43:33.805896 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 19 11:43:33.808717 systemd-networkd[752]: eth1: Link UP Mar 19 11:43:33.808723 systemd-networkd[752]: eth1: Gained carrier Mar 19 11:43:33.808740 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:43:33.809568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 19 11:43:33.824413 systemd-networkd[752]: eth0: DHCPv4 address 146.190.145.41/20, gateway 146.190.144.1 acquired from 169.254.169.253 Mar 19 11:43:33.827421 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.26/20 acquired from 169.254.169.253 Mar 19 11:43:33.849850 ignition[756]: Ignition 2.20.0 Mar 19 11:43:33.849900 ignition[756]: Stage: fetch Mar 19 11:43:33.850139 ignition[756]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.850179 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:33.850344 ignition[756]: parsed url from cmdline: "" Mar 19 11:43:33.850350 ignition[756]: no config URL provided Mar 19 11:43:33.850357 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:43:33.850368 ignition[756]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:43:33.850398 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 19 11:43:33.875435 ignition[756]: GET result: OK Mar 19 11:43:33.875672 ignition[756]: parsing config with SHA512: ea0f56696cdd3eca430dd864eb3e219ac4d1f41292ee68db20416c3dde68ebfc90ce12c83b94ff951eb6b0e70bca31e7f7fa1be0acf63e55c281d93ceaff8530 Mar 19 11:43:33.883075 unknown[756]: fetched base config from "system" Mar 19 11:43:33.883203 unknown[756]: fetched base config from "system" Mar 19 11:43:33.883709 ignition[756]: fetch: fetch complete Mar 19 11:43:33.883215 unknown[756]: fetched user config from "digitalocean" Mar 19 11:43:33.883719 ignition[756]: fetch: fetch passed Mar 19 11:43:33.883796 ignition[756]: Ignition finished successfully Mar 19 11:43:33.886683 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:43:33.890603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
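The Ignition fetch stage above GETs the DigitalOcean user-data endpoint and then logs the SHA-512 of the body it parsed. A minimal sketch of that round trip, using the URL from the log; it only works from inside a droplet and is an illustration, not Ignition's implementation:

```python
# Minimal sketch: fetch the user-data document Ignition GETs above and compute
# the SHA-512 digest it logs as "parsing config with SHA512: ...".
import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"

def fetch_user_data(url=URL, timeout=5):
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    return body, hashlib.sha512(body).hexdigest()

if __name__ == "__main__":
    body, digest = fetch_user_data()
    print("fetched", len(body), "bytes, sha512:", digest)
```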
Mar 19 11:43:33.914848 ignition[764]: Ignition 2.20.0 Mar 19 11:43:33.914863 ignition[764]: Stage: kargs Mar 19 11:43:33.915060 ignition[764]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.915071 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:33.916253 ignition[764]: kargs: kargs passed Mar 19 11:43:33.916348 ignition[764]: Ignition finished successfully Mar 19 11:43:33.919436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:43:33.924554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:43:33.945458 ignition[770]: Ignition 2.20.0 Mar 19 11:43:33.945472 ignition[770]: Stage: disks Mar 19 11:43:33.945745 ignition[770]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.948500 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:43:33.945760 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:33.951485 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:33.946794 ignition[770]: disks: disks passed Mar 19 11:43:33.952341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:43:33.946859 ignition[770]: Ignition finished successfully Mar 19 11:43:33.953196 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:43:33.954139 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:43:33.954904 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:43:33.961580 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:43:33.984851 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:43:33.988957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:43:34.546478 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:43:34.663419 kernel: EXT4-fs (vda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none. Mar 19 11:43:34.664699 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:43:34.665877 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:43:34.675462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:43:34.678419 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:43:34.681559 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Mar 19 11:43:34.687907 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 19 11:43:34.689674 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786) Mar 19 11:43:34.692485 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:43:34.692548 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:43:34.692569 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:34.694159 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:43:34.695303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:34.698583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:43:34.710529 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 19 11:43:34.723346 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:34.734361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:43:34.784455 coreos-metadata[788]: Mar 19 11:43:34.784 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 19 11:43:34.790414 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:43:34.796950 coreos-metadata[789]: Mar 19 11:43:34.796 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 19 11:43:34.798848 coreos-metadata[788]: Mar 19 11:43:34.798 INFO Fetch successful Mar 19 11:43:34.801972 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:43:34.807187 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Mar 19 11:43:34.807315 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Mar 19 11:43:34.811433 coreos-metadata[789]: Mar 19 11:43:34.810 INFO Fetch successful Mar 19 11:43:34.813326 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:43:34.815363 coreos-metadata[789]: Mar 19 11:43:34.815 INFO wrote hostname ci-4230.1.0-4-956bb2dfea to /sysroot/etc/hostname Mar 19 11:43:34.817360 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:43:34.822567 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:43:34.948208 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:43:34.954497 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:43:34.958527 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:43:34.971396 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:43:34.994485 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:43:35.002974 ignition[907]: INFO : Ignition 2.20.0 Mar 19 11:43:35.004403 ignition[907]: INFO : Stage: mount Mar 19 11:43:35.004403 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:35.004403 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:35.007062 ignition[907]: INFO : mount: mount passed Mar 19 11:43:35.007062 ignition[907]: INFO : Ignition finished successfully Mar 19 11:43:35.009816 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:43:35.016524 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:43:35.261540 systemd-networkd[752]: eth0: Gained IPv6LL Mar 19 11:43:35.517471 systemd-networkd[752]: eth1: Gained IPv6LL Mar 19 11:43:35.538602 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:43:35.542612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:43:35.621320 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918) Mar 19 11:43:35.623690 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:43:35.623772 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:43:35.625346 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:35.630342 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:35.633201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
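The flatcar-metadata-hostname unit fetches the droplet's metadata JSON and drops the hostname into the target root, which is what the "wrote hostname ci-4230.1.0-4-956bb2dfea to /sysroot/etc/hostname" line records. A rough Python equivalent follows; the "hostname" field name is an assumption based on the DigitalOcean metadata format, and this is not the agent's own code.

    import json
    import pathlib
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]  # assumed field name in the droplet metadata
        target = pathlib.Path(sysroot, "etc/hostname")
        target.write_text(hostname + "\n")
        print(f"wrote hostname {hostname} to {target}")
        return hostname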
Mar 19 11:43:35.662706 ignition[934]: INFO : Ignition 2.20.0 Mar 19 11:43:35.663408 ignition[934]: INFO : Stage: files Mar 19 11:43:35.664203 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:35.665811 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:35.665811 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:43:35.668035 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:43:35.668035 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:43:35.672774 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:43:35.673662 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:43:35.674431 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:43:35.674327 unknown[934]: wrote ssh authorized keys file for user: core Mar 19 11:43:35.676583 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:43:35.677459 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 19 11:43:36.038579 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 19 11:43:36.332734 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:43:36.333747 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:36.333747 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:36.333747 ignition[934]: INFO : files: files passed Mar 19 11:43:36.333747 ignition[934]: INFO : Ignition finished successfully Mar 19 11:43:36.334768 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:43:36.341616 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
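The files stage above performs two related operations for the Kubernetes sysext: it links /etc/extensions/kubernetes.raw to a path under /opt/extensions (op(5)) and then downloads the image there from the sysext-bakery release (op(6)). The sketch below mirrors just those two ops in the same order; it is not Ignition itself and omits all error handling.

    import os
    import urllib.request

    SYSROOT = "/sysroot"
    IMAGE_REL = "opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
    IMAGE_URL = ("https://github.com/flatcar/sysext-bakery/releases/"
                 "download/latest/kubernetes-v1.32.0-x86-64.raw")

    # op(5): point /etc/extensions/kubernetes.raw at the image location.
    link_path = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    os.symlink("/" + IMAGE_REL, link_path)

    # op(6): fetch the sysext image into the target root.
    image_path = os.path.join(SYSROOT, IMAGE_REL)
    os.makedirs(os.path.dirname(image_path), exist_ok=True)
    urllib.request.urlretrieve(IMAGE_URL, image_path)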
Mar 19 11:43:36.343544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:43:36.351903 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:43:36.352834 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:43:36.364325 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:36.364325 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:36.367971 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:36.370553 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:36.371396 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:43:36.380623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:43:36.413129 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:43:36.413320 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:43:36.415356 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:43:36.416090 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:43:36.417147 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:43:36.422594 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:43:36.444847 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:36.451790 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:43:36.480907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:36.481721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:36.482708 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:43:36.483646 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:43:36.483961 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:36.485862 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:43:36.486568 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:43:36.487386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:43:36.488486 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:36.489515 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:36.490693 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:43:36.491780 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:36.492692 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:43:36.493578 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:43:36.494620 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:43:36.495425 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:43:36.495887 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:36.497335 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 19 11:43:36.498118 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:43:36.498956 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:43:36.499129 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:43:36.500099 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:43:36.500364 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:36.501627 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:43:36.501846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:36.504261 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:43:36.504474 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:43:36.505590 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 19 11:43:36.505780 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:43:36.515975 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:43:36.517051 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:43:36.517850 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:43:36.521316 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:43:36.522266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:43:36.522517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:36.524342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:43:36.524988 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:43:36.534796 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:43:36.534908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:43:36.549343 ignition[987]: INFO : Ignition 2.20.0 Mar 19 11:43:36.549343 ignition[987]: INFO : Stage: umount Mar 19 11:43:36.549343 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:36.549343 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 19 11:43:36.553699 ignition[987]: INFO : umount: umount passed Mar 19 11:43:36.553699 ignition[987]: INFO : Ignition finished successfully Mar 19 11:43:36.552795 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:43:36.552992 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:43:36.555558 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:43:36.555797 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:43:36.558238 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:43:36.558345 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:43:36.560644 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 19 11:43:36.560786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 19 11:43:36.561390 systemd[1]: Stopped target network.target - Network. Mar 19 11:43:36.562038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:43:36.562145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:36.562963 systemd[1]: Stopped target paths.target - Path Units. 
Mar 19 11:43:36.564639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:43:36.570826 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:43:36.589213 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:43:36.589970 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:43:36.592227 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:43:36.592329 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:43:36.593106 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:43:36.593182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:43:36.594153 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:43:36.594247 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:43:36.595162 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:43:36.595242 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:43:36.596677 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:43:36.598428 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:43:36.605778 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:43:36.606853 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:43:36.607015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:43:36.609685 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:43:36.609868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:43:36.616693 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:43:36.617133 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:43:36.617367 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:43:36.620187 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:43:36.622713 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:43:36.622798 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:43:36.624097 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:43:36.624208 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:43:36.629510 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:43:36.630007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:43:36.630104 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:36.630668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:43:36.630754 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:43:36.631819 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:43:36.631908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:43:36.633693 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:43:36.633773 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:43:36.634591 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 19 11:43:36.637612 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:43:36.637705 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:43:36.649945 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:43:36.650163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:43:36.654690 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:43:36.654784 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:43:36.655787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:43:36.655835 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:43:36.657018 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:43:36.657102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:43:36.658332 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:43:36.658396 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:43:36.659474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:43:36.659738 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:43:36.665686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:43:36.666170 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:43:36.666255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:43:36.667337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:43:36.667447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:36.671233 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:43:36.671341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:43:36.671896 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:43:36.672013 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:43:36.691207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:43:36.691377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:43:36.692471 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:43:36.706368 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:43:36.719872 systemd[1]: Switching root. Mar 19 11:43:36.758159 systemd-journald[182]: Journal stopped Mar 19 11:43:38.221754 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). 
Mar 19 11:43:38.221876 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:43:38.221913 kernel: SELinux: policy capability open_perms=1 Mar 19 11:43:38.221934 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:43:38.221961 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:43:38.221987 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:43:38.222012 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:43:38.222032 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:43:38.222051 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:43:38.222072 kernel: audit: type=1403 audit(1742384616.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:43:38.222095 systemd[1]: Successfully loaded SELinux policy in 48.601ms. Mar 19 11:43:38.222130 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.497ms. Mar 19 11:43:38.222160 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:43:38.222190 systemd[1]: Detected virtualization kvm. Mar 19 11:43:38.222213 systemd[1]: Detected architecture x86-64. Mar 19 11:43:38.222234 systemd[1]: Detected first boot. Mar 19 11:43:38.222256 systemd[1]: Hostname set to . Mar 19 11:43:38.222279 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:43:38.222929 zram_generator::config[1032]: No configuration found. Mar 19 11:43:38.222963 kernel: Guest personality initialized and is inactive Mar 19 11:43:38.223000 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 19 11:43:38.223019 kernel: Initialized host personality Mar 19 11:43:38.223039 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:43:38.223060 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:43:38.223087 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:43:38.223109 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:43:38.223131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:43:38.223152 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:43:38.223173 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:43:38.223202 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:43:38.223226 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:43:38.223249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:43:38.223272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:43:38.225329 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:43:38.225365 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:43:38.225385 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:43:38.225418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:43:38.225449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
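"Initializing machine ID from VM UUID" refers to systemd deriving the machine ID on first boot from the hypervisor-provided DMI product UUID. The snippet below is only a sketch of that idea; the real derivation lives inside systemd and differs in detail.

    import pathlib
    import uuid

    def machine_id_from_vm_uuid(dmi_path: str = "/sys/class/dmi/id/product_uuid") -> str:
        # KVM exposes the droplet UUID via DMI; fold it into the 32-hex-char
        # machine-id format. Not systemd's exact algorithm.
        raw = pathlib.Path(dmi_path).read_text().strip()
        return uuid.UUID(raw).hex

    print(machine_id_from_vm_uuid())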
Mar 19 11:43:38.225472 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:43:38.225494 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:43:38.225517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:43:38.225540 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:43:38.225562 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 19 11:43:38.225588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:43:38.225611 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:43:38.225635 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:43:38.225657 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:43:38.225679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:43:38.225701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:38.225722 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:43:38.225744 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:43:38.225763 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:43:38.225782 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:43:38.225808 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:43:38.225831 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:43:38.225852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:43:38.225874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:43:38.225896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:43:38.225918 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:43:38.225940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:43:38.225963 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:43:38.225985 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:43:38.226012 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:38.226033 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:43:38.226054 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:43:38.226076 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:43:38.226099 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:43:38.226121 systemd[1]: Reached target machines.target - Containers. Mar 19 11:43:38.226143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:43:38.226166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:43:38.226192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 19 11:43:38.226213 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:43:38.226235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:43:38.226256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:43:38.226277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:43:38.226329 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:43:38.226350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:43:38.226374 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:43:38.226400 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:43:38.226422 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:43:38.226443 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:43:38.226464 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:43:38.226487 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:43:38.226509 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:43:38.226531 kernel: fuse: init (API version 7.39) Mar 19 11:43:38.226553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:43:38.226575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:43:38.226602 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:43:38.226624 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:43:38.226646 kernel: loop: module loaded Mar 19 11:43:38.226665 kernel: ACPI: bus type drm_connector registered Mar 19 11:43:38.226686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:43:38.226717 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:43:38.226738 systemd[1]: Stopped verity-setup.service. Mar 19 11:43:38.226762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:38.226781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:43:38.226801 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:43:38.226824 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:43:38.226844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:43:38.226865 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:43:38.226886 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:43:38.226907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:43:38.226927 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:43:38.226975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:43:38.226998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
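The modprobe@ units above load a small, fixed set of modules (configfs, dm_mod, drm, efi_pstore, fuse, loop) early in the boot; the fuse, loop, and drm "init"/"registered" kernel lines confirm several of them. A quick check of which of those are currently loaded, scanning /proc/modules (built-in modules do not appear there, which this sketch does not distinguish):

    WANTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def loaded_modules(path: str = "/proc/modules") -> set:
        with open(path) as f:
            return {line.split()[0] for line in f if line.strip()}

    if __name__ == "__main__":
        present = loaded_modules()
        for mod in WANTED:
            print(f"{mod}: {'loaded' if mod in present else 'missing or built-in'}")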
Mar 19 11:43:38.227020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:43:38.227047 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:43:38.227068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:43:38.227092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:43:38.227114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:43:38.227136 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:43:38.227159 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:43:38.227179 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:43:38.227200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:43:38.227223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:43:38.227248 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:43:38.227270 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:43:38.230423 systemd-journald[1106]: Collecting audit messages is disabled. Mar 19 11:43:38.230495 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:43:38.230523 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:43:38.230546 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:43:38.230570 systemd-journald[1106]: Journal started Mar 19 11:43:38.230618 systemd-journald[1106]: Runtime Journal (/run/log/journal/c4a1cdcb5db94c38bbb73e92b2c431b6) is 4.9M, max 39.3M, 34.4M free. Mar 19 11:43:38.237361 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:43:38.237461 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:43:37.798799 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:43:37.813262 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 19 11:43:37.813738 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:43:38.242327 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:43:38.248912 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:43:38.259330 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:43:38.269604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:43:38.269723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:43:38.280329 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:43:38.284602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:43:38.300464 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:43:38.300565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:43:38.312322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
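journald's start-up line sizes the runtime journal against the tmpfs it lives on ("4.9M, max 39.3M, 34.4M free"). The stand-in below only measures a journal directory and the free space of its filesystem; journald's "max" is its own computed cap, a fraction of that filesystem, so treat the figures as an approximation.

    import pathlib
    import shutil

    def journal_usage(path: str = "/run/log/journal"):
        root = pathlib.Path(path)
        used = sum(p.stat().st_size for p in root.rglob("*") if p.is_file())
        total, _used_fs, free = shutil.disk_usage(path)
        return used, total, free

    used, total, free = journal_usage()
    mib = 2 ** 20
    print(f"journal uses {used / mib:.1f}M on a {total / mib:.1f}M filesystem, {free / mib:.1f}M free")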
Mar 19 11:43:38.330907 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:43:38.331020 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:43:38.335108 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:43:38.336622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:38.337671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:43:38.338657 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:43:38.339905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:43:38.375388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:43:38.390929 kernel: loop0: detected capacity change from 0 to 147912 Mar 19 11:43:38.389103 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:43:38.400615 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:43:38.404427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:43:38.407972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:43:38.422662 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:43:38.444793 systemd-journald[1106]: Time spent on flushing to /var/log/journal/c4a1cdcb5db94c38bbb73e92b2c431b6 is 70.398ms for 991 entries. Mar 19 11:43:38.444793 systemd-journald[1106]: System Journal (/var/log/journal/c4a1cdcb5db94c38bbb73e92b2c431b6) is 8M, max 195.6M, 187.6M free. Mar 19 11:43:38.525227 systemd-journald[1106]: Received client request to flush runtime journal. Mar 19 11:43:38.525364 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:43:38.525395 kernel: loop1: detected capacity change from 0 to 8 Mar 19 11:43:38.453756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:43:38.494135 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:43:38.504511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:43:38.516953 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 19 11:43:38.528723 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:43:38.530614 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:43:38.541538 kernel: loop2: detected capacity change from 0 to 218376 Mar 19 11:43:38.593319 kernel: loop3: detected capacity change from 0 to 138176 Mar 19 11:43:38.601970 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Mar 19 11:43:38.602000 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Mar 19 11:43:38.618807 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 19 11:43:38.676335 kernel: loop4: detected capacity change from 0 to 147912 Mar 19 11:43:38.721497 kernel: loop5: detected capacity change from 0 to 8 Mar 19 11:43:38.724357 kernel: loop6: detected capacity change from 0 to 218376 Mar 19 11:43:38.739683 kernel: loop7: detected capacity change from 0 to 138176 Mar 19 11:43:38.788779 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Mar 19 11:43:38.793717 (sd-merge)[1185]: Merged extensions into '/usr'. Mar 19 11:43:38.804994 systemd[1]: Reload requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:43:38.805186 systemd[1]: Reloading... Mar 19 11:43:38.997002 zram_generator::config[1212]: No configuration found. Mar 19 11:43:39.268326 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:43:39.310448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:43:39.399911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:43:39.400179 systemd[1]: Reloading finished in 593 ms. Mar 19 11:43:39.415858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:43:39.417181 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:43:39.438641 systemd[1]: Starting ensure-sysext.service... Mar 19 11:43:39.445610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:43:39.460588 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:43:39.460763 systemd[1]: Reloading... Mar 19 11:43:39.526006 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:43:39.526381 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:43:39.527240 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:43:39.527547 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 19 11:43:39.527623 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 19 11:43:39.532860 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:43:39.532876 systemd-tmpfiles[1257]: Skipping /boot Mar 19 11:43:39.561337 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:43:39.561391 systemd-tmpfiles[1257]: Skipping /boot Mar 19 11:43:39.622344 zram_generator::config[1285]: No configuration found. Mar 19 11:43:39.788347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:43:39.866061 systemd[1]: Reloading finished in 404 ms. Mar 19 11:43:39.882130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:43:39.894070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:43:39.906782 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
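systemd-sysext's merge step above discovered four extension images and overlaid them onto /usr, which is why a full daemon reload follows. The sketch below covers only the discovery half, enumerating images in the usual sysext search directories; the directory list is an assumption, and the merge itself (an overlayfs mount) is not reproduced.

    import pathlib

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list:
        found = []
        for base in SEARCH_PATHS:
            root = pathlib.Path(base)
            if not root.is_dir():
                continue
            for entry in sorted(root.iterdir()):
                if entry.is_dir() or entry.suffix == ".raw":
                    found.append(entry.name.removesuffix(".raw"))
        return found

    names = discover_extensions()
    if names:
        print("Using extensions " + ", ".join(f"'{n}'" for n in names) + ".")
    else:
        print("No extensions found.")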
Mar 19 11:43:39.914992 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:43:39.920647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:43:39.934728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:43:39.940751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:43:39.952734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:43:39.969800 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:39.970107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:43:39.981729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:43:39.991779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:43:40.003755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:43:40.006507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:43:40.006734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:43:40.016836 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:43:40.018442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:40.022347 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:43:40.025674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:43:40.026006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:43:40.032182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:43:40.032536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:43:40.057380 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:43:40.057700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:43:40.061190 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:43:40.064138 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Mar 19 11:43:40.070009 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:43:40.080989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:40.081421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:43:40.090710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:43:40.094707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:43:40.103813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:43:40.115760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 19 11:43:40.116508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:43:40.116714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:43:40.122770 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:43:40.123368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:43:40.123579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:40.134131 systemd[1]: Finished ensure-sysext.service. Mar 19 11:43:40.135861 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:43:40.137452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:43:40.138364 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:43:40.151729 augenrules[1376]: No rules Mar 19 11:43:40.165607 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 19 11:43:40.166807 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:43:40.167133 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:43:40.168173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:43:40.168792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:43:40.169584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:43:40.170925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:43:40.171196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:43:40.173060 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:43:40.173350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:43:40.192612 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:43:40.193181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:43:40.193299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:43:40.217813 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:43:40.304454 systemd-resolved[1333]: Positive Trust Anchors: Mar 19 11:43:40.305558 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:43:40.305880 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:43:40.312039 systemd-resolved[1333]: Using system hostname 'ci-4230.1.0-4-956bb2dfea'. Mar 19 11:43:40.314157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:43:40.316571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:40.399606 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 19 11:43:40.400689 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:43:40.407103 systemd-networkd[1393]: lo: Link UP Mar 19 11:43:40.407115 systemd-networkd[1393]: lo: Gained carrier Mar 19 11:43:40.411458 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Mar 19 11:43:40.416845 systemd-networkd[1393]: Enumeration completed Mar 19 11:43:40.419597 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Mar 19 11:43:40.420181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:40.420435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:43:40.421703 systemd-timesyncd[1382]: No network connectivity, watching for changes. Mar 19 11:43:40.424239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:43:40.427043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:43:40.431700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:43:40.432222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:43:40.432295 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:43:40.432339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:43:40.432361 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:43:40.432569 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:43:40.435159 systemd[1]: Reached target network.target - Network. Mar 19 11:43:40.439048 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:43:40.444588 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
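The ". IN DS 20326 8 2 e06d…" record that systemd-resolved lists as its positive trust anchor is the DNSSEC root key (KSK-2017) in DS form. A tiny parser for that presentation format, included only to label the fields:

    def parse_ds(record: str) -> dict:
        owner, _class, _rtype, key_tag, algorithm, digest_type, digest = record.split(maxsplit=6)
        return {
            "owner": owner,                    # "." is the DNS root zone
            "key_tag": int(key_tag),           # 20326 identifies the key
            "algorithm": int(algorithm),       # 8 = RSA/SHA-256
            "digest_type": int(digest_type),   # 2 = SHA-256
            "digest": digest.replace(" ", ""),
        }

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(parse_ds(anchor))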
Mar 19 11:43:40.465995 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 19 11:43:40.470988 kernel: ISO 9660 Extensions: RRIP_1991A Mar 19 11:43:40.473095 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Mar 19 11:43:40.480163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:43:40.481423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:43:40.484681 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:43:40.484922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:43:40.489396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:43:40.492467 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1404) Mar 19 11:43:40.506262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:43:40.507384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:43:40.508229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:43:40.517803 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:43:40.546223 systemd-networkd[1393]: eth1: Configuring with /run/systemd/network/10-5a:72:b9:11:81:b2.network. Mar 19 11:43:40.548167 systemd-networkd[1393]: eth1: Link UP Mar 19 11:43:40.548178 systemd-networkd[1393]: eth1: Gained carrier Mar 19 11:43:40.557061 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Mar 19 11:43:40.609204 systemd-networkd[1393]: eth0: Configuring with /run/systemd/network/10-6a:68:dc:4d:3e:e9.network. Mar 19 11:43:40.612467 systemd-networkd[1393]: eth0: Link UP Mar 19 11:43:40.612477 systemd-networkd[1393]: eth0: Gained carrier Mar 19 11:43:40.619648 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:43:40.626732 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:43:40.645232 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 19 11:43:40.651601 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 19 11:43:40.648115 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:43:40.652354 kernel: ACPI: button: Power Button [PWRF] Mar 19 11:43:40.721836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
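In the real root, networkd configures eth0 and eth1 from runtime files named 10-<mac>.network, which match interfaces by MAC address and so avoid the "potentially unpredictable interface name" warning seen during the initrd. The sketch below writes such a unit; only the MAC-based match and the file name follow the log, and the [Network] contents are assumptions.

    import pathlib

    RUNTIME_DIR = "/run/systemd/network"

    def write_mac_pinned_unit(mac: str, runtime_dir: str = RUNTIME_DIR) -> pathlib.Path:
        unit = (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=ipv4\n"   # assumed; the real generated unit may carry different settings
        )
        path = pathlib.Path(runtime_dir, f"10-{mac}.network")
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(unit)
        return path

    print(write_mac_pinned_unit("6a:68:dc:4d:3e:e9"))   # eth0's MAC from the log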
Mar 19 11:43:40.736548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 19 11:43:40.750327 kernel: mousedev: PS/2 mouse device common for all mice Mar 19 11:43:40.816319 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Mar 19 11:43:40.823353 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Mar 19 11:43:40.825492 kernel: Console: switching to colour dummy device 80x25 Mar 19 11:43:40.826495 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 19 11:43:40.826565 kernel: [drm] features: -context_init Mar 19 11:43:40.827476 kernel: [drm] number of scanouts: 1 Mar 19 11:43:40.827547 kernel: [drm] number of cap sets: 0 Mar 19 11:43:40.828132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:40.831413 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Mar 19 11:43:40.847217 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 19 11:43:40.847473 kernel: Console: switching to colour frame buffer device 128x48 Mar 19 11:43:40.851330 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 19 11:43:40.888824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:43:40.889177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:40.916695 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:40.929837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:40.937527 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:43:40.970193 kernel: EDAC MC: Ver: 3.0.0 Mar 19 11:43:41.001260 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:43:41.012586 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:43:41.032275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:41.034308 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:43:41.068402 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:43:41.069880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:43:41.070120 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:43:41.070590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:43:41.071376 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:43:41.071929 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:43:41.072142 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:43:41.072215 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:43:41.072314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:43:41.072345 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:43:41.072404 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:43:41.075769 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Mar 19 11:43:41.078061 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:43:41.083055 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:43:41.084779 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:43:41.086377 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:43:41.096984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:43:41.100062 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:43:41.108710 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:43:41.112009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:43:41.114349 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:43:41.116222 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:43:41.116966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:43:41.117008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:43:41.122493 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:43:41.131569 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 19 11:43:41.133457 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:43:41.144684 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:43:41.148249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:43:41.153225 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:43:41.154789 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:43:41.166172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:43:41.173623 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:43:41.176270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:43:41.187695 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:43:41.192160 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:43:41.193897 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:43:41.203508 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:43:41.208876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:43:41.214821 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:43:41.246320 jq[1461]: false Mar 19 11:43:41.254881 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:43:41.255095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Mar 19 11:43:41.264810 update_engine[1469]: I20250319 11:43:41.259114 1469 main.cc:92] Flatcar Update Engine starting Mar 19 11:43:41.260712 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:43:41.260941 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:43:41.279329 jq[1470]: true Mar 19 11:43:41.301535 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:43:41.309409 dbus-daemon[1460]: [system] SELinux support is enabled Mar 19 11:43:41.310066 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:43:41.315972 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:43:41.316023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:43:41.318099 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:43:41.318243 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Mar 19 11:43:41.318268 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:43:41.325551 jq[1482]: true Mar 19 11:43:41.337227 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:43:41.339570 update_engine[1469]: I20250319 11:43:41.337580 1469 update_check_scheduler.cc:74] Next update check in 10m34s Mar 19 11:43:41.351750 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:43:41.358328 extend-filesystems[1462]: Found loop4 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found loop5 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found loop6 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found loop7 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda1 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda2 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda3 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found usr Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda4 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda6 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda7 Mar 19 11:43:41.362753 extend-filesystems[1462]: Found vda9 Mar 19 11:43:41.362753 extend-filesystems[1462]: Checking size of /dev/vda9 Mar 19 11:43:41.378102 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:43:41.378465 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:43:41.395397 coreos-metadata[1459]: Mar 19 11:43:41.392 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 19 11:43:41.414470 coreos-metadata[1459]: Mar 19 11:43:41.410 INFO Fetch successful Mar 19 11:43:41.426009 extend-filesystems[1462]: Resized partition /dev/vda9 Mar 19 11:43:41.436906 systemd-logind[1468]: New seat seat0. 
Mar 19 11:43:41.442205 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button) Mar 19 11:43:41.458026 extend-filesystems[1512]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:43:41.442247 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 11:43:41.443381 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:43:41.473978 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Mar 19 11:43:41.529810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1395) Mar 19 11:43:41.576885 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 11:43:41.589791 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:43:41.597038 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:43:41.651023 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:43:41.658621 systemd[1]: Starting sshkeys.service... Mar 19 11:43:41.671252 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:43:41.717613 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 19 11:43:41.729829 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 19 11:43:41.759532 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 19 11:43:41.775348 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:43:41.780589 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 19 11:43:41.780589 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 19 11:43:41.780589 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 19 11:43:41.787027 extend-filesystems[1462]: Resized filesystem in /dev/vda9 Mar 19 11:43:41.787027 extend-filesystems[1462]: Found vdb Mar 19 11:43:41.782438 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:43:41.784024 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:43:41.800864 coreos-metadata[1531]: Mar 19 11:43:41.800 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 19 11:43:41.812210 coreos-metadata[1531]: Mar 19 11:43:41.811 INFO Fetch successful Mar 19 11:43:41.825952 unknown[1531]: wrote ssh authorized keys file for user: core Mar 19 11:43:41.843693 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:43:41.859706 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:43:41.876063 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:43:41.877093 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:43:41.876664 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:43:41.880363 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 19 11:43:41.885649 systemd[1]: Finished sshkeys.service. Mar 19 11:43:41.898779 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 19 11:43:41.901576 containerd[1479]: time="2025-03-19T11:43:41.901444590Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:43:41.929154 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:43:41.942896 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:43:41.949535 containerd[1479]: time="2025-03-19T11:43:41.949461703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.951656957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.951706505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.951735324Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.951935763Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.951954251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952012121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952025043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952252880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952267999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952280655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952329 containerd[1479]: time="2025-03-19T11:43:41.952318740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952744 containerd[1479]: time="2025-03-19T11:43:41.952429040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:41.952744 containerd[1479]: time="2025-03-19T11:43:41.952678789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 19 11:43:41.954311 containerd[1479]: time="2025-03-19T11:43:41.952842718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:41.954311 containerd[1479]: time="2025-03-19T11:43:41.952864170Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:43:41.954311 containerd[1479]: time="2025-03-19T11:43:41.952978158Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:43:41.954311 containerd[1479]: time="2025-03-19T11:43:41.953068179Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:43:41.954481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:43:41.956216 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:43:41.961403 containerd[1479]: time="2025-03-19T11:43:41.961238088Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:43:41.961403 containerd[1479]: time="2025-03-19T11:43:41.961347099Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:43:41.961403 containerd[1479]: time="2025-03-19T11:43:41.961366044Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:43:41.961403 containerd[1479]: time="2025-03-19T11:43:41.961383591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:43:41.961403 containerd[1479]: time="2025-03-19T11:43:41.961398366Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:43:41.961735 containerd[1479]: time="2025-03-19T11:43:41.961590806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:43:41.961862 containerd[1479]: time="2025-03-19T11:43:41.961827843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:43:41.962028 containerd[1479]: time="2025-03-19T11:43:41.961999176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:43:41.962078 containerd[1479]: time="2025-03-19T11:43:41.962030963Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:43:41.962078 containerd[1479]: time="2025-03-19T11:43:41.962053669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:43:41.962078 containerd[1479]: time="2025-03-19T11:43:41.962073631Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962089393Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962102326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962117985Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962131609Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962148330Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962160111Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962175 containerd[1479]: time="2025-03-19T11:43:41.962171277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962190985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962218409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962234292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962247645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962259344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962271593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962299671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962312961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962326363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962340498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962352193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962362761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962370 containerd[1479]: time="2025-03-19T11:43:41.962374936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962388689Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962410788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962423596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962434142Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962485328Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962512809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962529580Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962547979Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962561758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962577327Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962590995Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:43:41.962785 containerd[1479]: time="2025-03-19T11:43:41.962605196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 11:43:41.963071 containerd[1479]: time="2025-03-19T11:43:41.962891405Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:43:41.963071 containerd[1479]: time="2025-03-19T11:43:41.962951569Z" level=info msg="Connect containerd service" Mar 19 11:43:41.963071 containerd[1479]: time="2025-03-19T11:43:41.962997193Z" level=info msg="using legacy CRI server" Mar 19 11:43:41.963071 containerd[1479]: time="2025-03-19T11:43:41.963008459Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:43:41.963373 containerd[1479]: time="2025-03-19T11:43:41.963189336Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:43:41.964160 containerd[1479]: time="2025-03-19T11:43:41.964118907Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:43:41.964892 
containerd[1479]: time="2025-03-19T11:43:41.964340905Z" level=info msg="Start subscribing containerd event" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964449135Z" level=info msg="Start recovering state" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964525314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964548178Z" level=info msg="Start event monitor" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964577648Z" level=info msg="Start snapshots syncer" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964587693Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964601873Z" level=info msg="Start streaming server" Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964616210Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:43:41.964892 containerd[1479]: time="2025-03-19T11:43:41.964685208Z" level=info msg="containerd successfully booted in 0.066005s" Mar 19 11:43:41.964797 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:43:41.981539 systemd-networkd[1393]: eth1: Gained IPv6LL Mar 19 11:43:41.984760 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:43:41.989818 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:43:42.013639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:42.018833 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:43:42.051373 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:43:42.365546 systemd-networkd[1393]: eth0: Gained IPv6LL Mar 19 11:43:43.157622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:43.159072 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:43:43.166771 systemd[1]: Startup finished in 1.101s (kernel) + 6.193s (initrd) + 6.302s (userspace) = 13.598s. Mar 19 11:43:43.169544 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:43:43.855072 kubelet[1577]: E0319 11:43:43.854939 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:43:43.857884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:43:43.858115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:43:43.858702 systemd[1]: kubelet.service: Consumed 1.320s CPU time, 252.4M memory peak. Mar 19 11:43:44.764791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:43:44.772738 systemd[1]: Started sshd@0-146.190.145.41:22-147.75.109.163:37956.service - OpenSSH per-connection server daemon (147.75.109.163:37956). 
Mar 19 11:43:44.847395 sshd[1590]: Accepted publickey for core from 147.75.109.163 port 37956 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:44.849955 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:44.865865 systemd-logind[1468]: New session 1 of user core. Mar 19 11:43:44.867110 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:43:44.879922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:43:44.896616 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:43:44.902742 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:43:44.918609 (systemd)[1594]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:43:44.922365 systemd-logind[1468]: New session c1 of user core. Mar 19 11:43:45.089132 systemd[1594]: Queued start job for default target default.target. Mar 19 11:43:45.099924 systemd[1594]: Created slice app.slice - User Application Slice. Mar 19 11:43:45.099964 systemd[1594]: Reached target paths.target - Paths. Mar 19 11:43:45.100017 systemd[1594]: Reached target timers.target - Timers. Mar 19 11:43:45.101797 systemd[1594]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:43:45.116026 systemd[1594]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:43:45.116154 systemd[1594]: Reached target sockets.target - Sockets. Mar 19 11:43:45.116212 systemd[1594]: Reached target basic.target - Basic System. Mar 19 11:43:45.116264 systemd[1594]: Reached target default.target - Main User Target. Mar 19 11:43:45.116326 systemd[1594]: Startup finished in 183ms. Mar 19 11:43:45.116511 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:43:45.128955 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:43:45.201406 systemd[1]: Started sshd@1-146.190.145.41:22-147.75.109.163:37958.service - OpenSSH per-connection server daemon (147.75.109.163:37958). Mar 19 11:43:45.255456 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 37958 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:45.257492 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:45.264751 systemd-logind[1468]: New session 2 of user core. Mar 19 11:43:45.272822 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:43:45.338344 sshd[1607]: Connection closed by 147.75.109.163 port 37958 Mar 19 11:43:45.338207 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:45.351240 systemd[1]: sshd@1-146.190.145.41:22-147.75.109.163:37958.service: Deactivated successfully. Mar 19 11:43:45.353830 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:43:45.356696 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:43:45.363839 systemd[1]: Started sshd@2-146.190.145.41:22-147.75.109.163:37974.service - OpenSSH per-connection server daemon (147.75.109.163:37974). Mar 19 11:43:45.366477 systemd-logind[1468]: Removed session 2. 
Mar 19 11:43:45.415876 sshd[1612]: Accepted publickey for core from 147.75.109.163 port 37974 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:45.418205 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:45.425897 systemd-logind[1468]: New session 3 of user core. Mar 19 11:43:45.433628 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:43:45.492463 sshd[1615]: Connection closed by 147.75.109.163 port 37974 Mar 19 11:43:45.493531 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:45.510000 systemd[1]: sshd@2-146.190.145.41:22-147.75.109.163:37974.service: Deactivated successfully. Mar 19 11:43:45.512722 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:43:45.515694 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:43:45.524800 systemd[1]: Started sshd@3-146.190.145.41:22-147.75.109.163:37990.service - OpenSSH per-connection server daemon (147.75.109.163:37990). Mar 19 11:43:45.527315 systemd-logind[1468]: Removed session 3. Mar 19 11:43:45.578345 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 37990 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:45.580171 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:45.587097 systemd-logind[1468]: New session 4 of user core. Mar 19 11:43:45.592612 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:43:45.655343 sshd[1623]: Connection closed by 147.75.109.163 port 37990 Mar 19 11:43:45.656375 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:45.671236 systemd[1]: sshd@3-146.190.145.41:22-147.75.109.163:37990.service: Deactivated successfully. Mar 19 11:43:45.673821 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:43:45.676542 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:43:45.681719 systemd[1]: Started sshd@4-146.190.145.41:22-147.75.109.163:37992.service - OpenSSH per-connection server daemon (147.75.109.163:37992). Mar 19 11:43:45.684553 systemd-logind[1468]: Removed session 4. Mar 19 11:43:45.734583 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 37992 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:45.736599 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:45.744410 systemd-logind[1468]: New session 5 of user core. Mar 19 11:43:45.750587 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:43:45.822262 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:43:45.822637 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:45.838924 sudo[1632]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:45.844320 sshd[1631]: Connection closed by 147.75.109.163 port 37992 Mar 19 11:43:45.843333 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:45.860636 systemd[1]: sshd@4-146.190.145.41:22-147.75.109.163:37992.service: Deactivated successfully. Mar 19 11:43:45.863827 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:43:45.865263 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. 
Mar 19 11:43:45.886935 systemd[1]: Started sshd@5-146.190.145.41:22-147.75.109.163:38006.service - OpenSSH per-connection server daemon (147.75.109.163:38006). Mar 19 11:43:45.888784 systemd-logind[1468]: Removed session 5. Mar 19 11:43:45.943546 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 38006 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:45.945356 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:45.950964 systemd-logind[1468]: New session 6 of user core. Mar 19 11:43:45.957647 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:43:46.019946 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:43:46.020301 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:46.025982 sudo[1642]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:46.036172 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:43:46.036749 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:46.057851 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:43:46.098397 augenrules[1664]: No rules Mar 19 11:43:46.100252 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:43:46.100601 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:43:46.102428 sudo[1641]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:46.106573 sshd[1640]: Connection closed by 147.75.109.163 port 38006 Mar 19 11:43:46.106912 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:46.121963 systemd[1]: sshd@5-146.190.145.41:22-147.75.109.163:38006.service: Deactivated successfully. Mar 19 11:43:46.124472 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:43:46.126801 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:43:46.130799 systemd[1]: Started sshd@6-146.190.145.41:22-147.75.109.163:38020.service - OpenSSH per-connection server daemon (147.75.109.163:38020). Mar 19 11:43:46.132689 systemd-logind[1468]: Removed session 6. Mar 19 11:43:46.192585 sshd[1672]: Accepted publickey for core from 147.75.109.163 port 38020 ssh2: RSA SHA256:ZrZa+wczChIMwO53Zaig+5deG2z8DMpvwgZ8y6nZWps Mar 19 11:43:46.194611 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:46.203753 systemd-logind[1468]: New session 7 of user core. Mar 19 11:43:46.206635 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:43:46.269980 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:43:46.270972 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:47.491157 systemd-resolved[1333]: Clock change detected. Flushing caches. Mar 19 11:43:47.491216 systemd-timesyncd[1382]: Contacted time server 66.42.71.197:123 (0.flatcar.pool.ntp.org). Mar 19 11:43:47.491298 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-03-19 11:43:47.490936 UTC. Mar 19 11:43:47.528306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:47.528479 systemd[1]: kubelet.service: Consumed 1.320s CPU time, 252.4M memory peak. 
Mar 19 11:43:47.545979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:47.588330 systemd[1]: Reload requested from client PID 1709 ('systemctl') (unit session-7.scope)... Mar 19 11:43:47.588553 systemd[1]: Reloading... Mar 19 11:43:47.764593 zram_generator::config[1752]: No configuration found. Mar 19 11:43:47.925788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:43:48.047719 systemd[1]: Reloading finished in 458 ms. Mar 19 11:43:48.101599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:48.107343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:48.110152 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:43:48.110462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:48.110544 systemd[1]: kubelet.service: Consumed 123ms CPU time, 91.7M memory peak. Mar 19 11:43:48.116163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:48.249804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:48.254252 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:43:48.327389 kubelet[1808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:43:48.327389 kubelet[1808]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:43:48.327389 kubelet[1808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:43:48.327960 kubelet[1808]: I0319 11:43:48.327444 1808 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:43:48.989652 kubelet[1808]: I0319 11:43:48.989591 1808 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:43:48.989652 kubelet[1808]: I0319 11:43:48.989636 1808 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:43:48.990088 kubelet[1808]: I0319 11:43:48.990060 1808 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:43:49.015062 kubelet[1808]: I0319 11:43:49.014222 1808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:43:49.026603 kubelet[1808]: E0319 11:43:49.026517 1808 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:43:49.026603 kubelet[1808]: I0319 11:43:49.026587 1808 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 19 11:43:49.033382 kubelet[1808]: I0319 11:43:49.032163 1808 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:43:49.033382 kubelet[1808]: I0319 11:43:49.032475 1808 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:43:49.033382 kubelet[1808]: I0319 11:43:49.032529 1808 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"146.190.145.41","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:43:49.033382 kubelet[1808]: I0319 11:43:49.032910 1808 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:43:49.033805 kubelet[1808]: I0319 11:43:49.032927 1808 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:43:49.033805 kubelet[1808]: I0319 11:43:49.033107 1808 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:43:49.039299 kubelet[1808]: I0319 11:43:49.038830 1808 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:43:49.039299 kubelet[1808]: I0319 11:43:49.038885 1808 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:43:49.039299 kubelet[1808]: I0319 11:43:49.038944 1808 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:43:49.039299 kubelet[1808]: I0319 11:43:49.038963 1808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:43:49.042893 kubelet[1808]: E0319 11:43:49.042848 1808 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:49.043510 kubelet[1808]: E0319 11:43:49.043467 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:49.044138 kubelet[1808]: I0319 11:43:49.044116 1808 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:43:49.044988 kubelet[1808]: I0319 11:43:49.044965 1808 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:43:49.046311 kubelet[1808]: W0319 11:43:49.045946 1808 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 19 11:43:49.048439 kubelet[1808]: I0319 11:43:49.048364 1808 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:43:49.048439 kubelet[1808]: I0319 11:43:49.048420 1808 server.go:1287] "Started kubelet" Mar 19 11:43:49.049527 kubelet[1808]: I0319 11:43:49.048776 1808 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:43:49.050580 kubelet[1808]: I0319 11:43:49.050557 1808 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:43:49.053788 kubelet[1808]: I0319 11:43:49.053756 1808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:43:49.054235 kubelet[1808]: I0319 11:43:49.054159 1808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:43:49.054539 kubelet[1808]: I0319 11:43:49.054515 1808 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:43:49.058354 kubelet[1808]: I0319 11:43:49.058007 1808 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:43:49.063090 kubelet[1808]: E0319 11:43:49.063041 1808 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:43:49.063781 kubelet[1808]: E0319 11:43:49.063218 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.063781 kubelet[1808]: I0319 11:43:49.063243 1808 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:43:49.063781 kubelet[1808]: I0319 11:43:49.063471 1808 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:43:49.063781 kubelet[1808]: I0319 11:43:49.063539 1808 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:43:49.065437 kubelet[1808]: I0319 11:43:49.065397 1808 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:43:49.065437 kubelet[1808]: I0319 11:43:49.065432 1808 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:43:49.065622 kubelet[1808]: I0319 11:43:49.065534 1808 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:43:49.078527 kubelet[1808]: E0319 11:43:49.076053 1808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.145.41.182e3196ac87a61f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.145.41,UID:146.190.145.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:146.190.145.41,},FirstTimestamp:2025-03-19 11:43:49.048387103 +0000 UTC m=+0.788071970,LastTimestamp:2025-03-19 11:43:49.048387103 +0000 UTC m=+0.788071970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.145.41,}" Mar 19 11:43:49.078527 kubelet[1808]: W0319 11:43:49.078229 1808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "146.190.145.41" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 11:43:49.078527 kubelet[1808]: E0319 11:43:49.078286 1808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"146.190.145.41\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 11:43:49.078527 kubelet[1808]: W0319 11:43:49.078357 1808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 11:43:49.078527 kubelet[1808]: E0319 11:43:49.078377 1808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 11:43:49.081436 kubelet[1808]: W0319 11:43:49.081396 1808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 19 11:43:49.081654 kubelet[1808]: E0319 11:43:49.081628 1808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 19 11:43:49.081895 kubelet[1808]: E0319 11:43:49.081871 1808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.145.41\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 19 11:43:49.094061 kubelet[1808]: I0319 11:43:49.094021 1808 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:43:49.094293 kubelet[1808]: I0319 11:43:49.094269 1808 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:43:49.094554 kubelet[1808]: I0319 11:43:49.094436 1808 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:43:49.094554 kubelet[1808]: E0319 11:43:49.094118 1808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.145.41.182e3196ad66f99f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.145.41,UID:146.190.145.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:146.190.145.41,},FirstTimestamp:2025-03-19 11:43:49.063023007 +0000 UTC m=+0.802707875,LastTimestamp:2025-03-19 11:43:49.063023007 +0000 UTC 
m=+0.802707875,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.145.41,}" Mar 19 11:43:49.099153 kubelet[1808]: I0319 11:43:49.099061 1808 policy_none.go:49] "None policy: Start" Mar 19 11:43:49.099153 kubelet[1808]: I0319 11:43:49.099145 1808 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:43:49.099347 kubelet[1808]: I0319 11:43:49.099170 1808 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:43:49.109477 kubelet[1808]: E0319 11:43:49.109347 1808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.145.41.182e3196aedf9f1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.145.41,UID:146.190.145.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 146.190.145.41 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:146.190.145.41,},FirstTimestamp:2025-03-19 11:43:49.087706908 +0000 UTC m=+0.827391772,LastTimestamp:2025-03-19 11:43:49.087706908 +0000 UTC m=+0.827391772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.145.41,}" Mar 19 11:43:49.111163 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:43:49.125637 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:43:49.127564 kubelet[1808]: E0319 11:43:49.126904 1808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.145.41.182e3196aee063c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.145.41,UID:146.190.145.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 146.190.145.41 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:146.190.145.41,},FirstTimestamp:2025-03-19 11:43:49.087757256 +0000 UTC m=+0.827442143,LastTimestamp:2025-03-19 11:43:49.087757256 +0000 UTC m=+0.827442143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.145.41,}" Mar 19 11:43:49.134518 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 19 11:43:49.144049 kubelet[1808]: I0319 11:43:49.144015 1808 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:43:49.144951 kubelet[1808]: I0319 11:43:49.144930 1808 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:43:49.145546 kubelet[1808]: I0319 11:43:49.145242 1808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:43:49.147152 kubelet[1808]: I0319 11:43:49.146995 1808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:43:49.150177 kubelet[1808]: E0319 11:43:49.149454 1808 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 19 11:43:49.150177 kubelet[1808]: E0319 11:43:49.149515 1808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"146.190.145.41\" not found" Mar 19 11:43:49.163706 kubelet[1808]: E0319 11:43:49.163587 1808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.145.41.182e3196aee074df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.145.41,UID:146.190.145.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 146.190.145.41 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:146.190.145.41,},FirstTimestamp:2025-03-19 11:43:49.087761631 +0000 UTC m=+0.827446501,LastTimestamp:2025-03-19 11:43:49.087761631 +0000 UTC m=+0.827446501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.145.41,}" Mar 19 11:43:49.172924 kubelet[1808]: I0319 11:43:49.172801 1808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:43:49.175472 kubelet[1808]: I0319 11:43:49.175433 1808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:43:49.175931 kubelet[1808]: I0319 11:43:49.175578 1808 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:43:49.175931 kubelet[1808]: I0319 11:43:49.175602 1808 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:43:49.175931 kubelet[1808]: I0319 11:43:49.175611 1808 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:43:49.175931 kubelet[1808]: E0319 11:43:49.175672 1808 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 19 11:43:49.247228 kubelet[1808]: I0319 11:43:49.246894 1808 kubelet_node_status.go:76] "Attempting to register node" node="146.190.145.41" Mar 19 11:43:49.269234 kubelet[1808]: I0319 11:43:49.269188 1808 kubelet_node_status.go:79] "Successfully registered node" node="146.190.145.41" Mar 19 11:43:49.269234 kubelet[1808]: E0319 11:43:49.269230 1808 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"146.190.145.41\": node \"146.190.145.41\" not found" Mar 19 11:43:49.298177 kubelet[1808]: E0319 11:43:49.298142 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.399133 kubelet[1808]: E0319 11:43:49.399074 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.414988 sudo[1676]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:49.420416 sshd[1675]: Connection closed by 147.75.109.163 port 38020 Mar 19 11:43:49.419447 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:49.424284 systemd[1]: sshd@6-146.190.145.41:22-147.75.109.163:38020.service: Deactivated successfully. Mar 19 11:43:49.429582 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:43:49.430373 systemd[1]: session-7.scope: Consumed 635ms CPU time, 73.8M memory peak. Mar 19 11:43:49.433843 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:43:49.435481 systemd-logind[1468]: Removed session 7. 
Mar 19 11:43:49.500376 kubelet[1808]: E0319 11:43:49.500183 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.600961 kubelet[1808]: E0319 11:43:49.600868 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.702170 kubelet[1808]: E0319 11:43:49.702062 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.803348 kubelet[1808]: E0319 11:43:49.803153 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.903433 kubelet[1808]: E0319 11:43:49.903320 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:49.992522 kubelet[1808]: I0319 11:43:49.992401 1808 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 19 11:43:49.992929 kubelet[1808]: W0319 11:43:49.992883 1808 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 19 11:43:50.004276 kubelet[1808]: E0319 11:43:50.004158 1808 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"146.190.145.41\" not found" Mar 19 11:43:50.044937 kubelet[1808]: E0319 11:43:50.044866 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:50.106026 kubelet[1808]: I0319 11:43:50.105478 1808 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 19 11:43:50.106556 containerd[1479]: time="2025-03-19T11:43:50.106409499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 19 11:43:50.107256 kubelet[1808]: I0319 11:43:50.106696 1808 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 19 11:43:51.045733 kubelet[1808]: E0319 11:43:51.045668 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:51.045733 kubelet[1808]: I0319 11:43:51.045679 1808 apiserver.go:52] "Watching apiserver" Mar 19 11:43:51.064429 kubelet[1808]: E0319 11:43:51.063467 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:43:51.064429 kubelet[1808]: I0319 11:43:51.063877 1808 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:43:51.072853 systemd[1]: Created slice kubepods-besteffort-pod88ee1369_af48_4d58_80bc_54697ac34cbe.slice - libcontainer container kubepods-besteffort-pod88ee1369_af48_4d58_80bc_54697ac34cbe.slice. 
Mar 19 11:43:51.077261 kubelet[1808]: I0319 11:43:51.077122 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-cni-bin-dir\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077261 kubelet[1808]: I0319 11:43:51.077187 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-cni-net-dir\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077423 kubelet[1808]: I0319 11:43:51.077301 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-cni-log-dir\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077423 kubelet[1808]: I0319 11:43:51.077354 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-flexvol-driver-host\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077423 kubelet[1808]: I0319 11:43:51.077374 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/28cc52f2-b4a5-4277-864b-64eb1318d7af-socket-dir\") pod \"csi-node-driver-tkgw6\" (UID: \"28cc52f2-b4a5-4277-864b-64eb1318d7af\") " pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:43:51.077423 kubelet[1808]: I0319 11:43:51.077390 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/28cc52f2-b4a5-4277-864b-64eb1318d7af-registration-dir\") pod \"csi-node-driver-tkgw6\" (UID: \"28cc52f2-b4a5-4277-864b-64eb1318d7af\") " pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:43:51.077423 kubelet[1808]: I0319 11:43:51.077406 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a4aa789-3717-4114-83b7-8a0744d19e29-xtables-lock\") pod \"kube-proxy-9nz6z\" (UID: \"3a4aa789-3717-4114-83b7-8a0744d19e29\") " pod="kube-system/kube-proxy-9nz6z" Mar 19 11:43:51.077695 kubelet[1808]: I0319 11:43:51.077430 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-lib-modules\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077695 kubelet[1808]: I0319 11:43:51.077451 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-xtables-lock\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077695 kubelet[1808]: I0319 11:43:51.077465 1808 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-policysync\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077695 kubelet[1808]: I0319 11:43:51.077482 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-var-lib-calico\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077695 kubelet[1808]: I0319 11:43:51.077511 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/28cc52f2-b4a5-4277-864b-64eb1318d7af-varrun\") pod \"csi-node-driver-tkgw6\" (UID: \"28cc52f2-b4a5-4277-864b-64eb1318d7af\") " pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:43:51.077891 kubelet[1808]: I0319 11:43:51.077528 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczpn\" (UniqueName: \"kubernetes.io/projected/3a4aa789-3717-4114-83b7-8a0744d19e29-kube-api-access-kczpn\") pod \"kube-proxy-9nz6z\" (UID: \"3a4aa789-3717-4114-83b7-8a0744d19e29\") " pod="kube-system/kube-proxy-9nz6z" Mar 19 11:43:51.077891 kubelet[1808]: I0319 11:43:51.077544 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88ee1369-af48-4d58-80bc-54697ac34cbe-tigera-ca-bundle\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077891 kubelet[1808]: I0319 11:43:51.077558 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/88ee1369-af48-4d58-80bc-54697ac34cbe-node-certs\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077891 kubelet[1808]: I0319 11:43:51.077582 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/88ee1369-af48-4d58-80bc-54697ac34cbe-var-run-calico\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.077891 kubelet[1808]: I0319 11:43:51.077597 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjzl\" (UniqueName: \"kubernetes.io/projected/88ee1369-af48-4d58-80bc-54697ac34cbe-kube-api-access-5pjzl\") pod \"calico-node-xqqjg\" (UID: \"88ee1369-af48-4d58-80bc-54697ac34cbe\") " pod="calico-system/calico-node-xqqjg" Mar 19 11:43:51.078075 kubelet[1808]: I0319 11:43:51.077630 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a4aa789-3717-4114-83b7-8a0744d19e29-kube-proxy\") pod \"kube-proxy-9nz6z\" (UID: \"3a4aa789-3717-4114-83b7-8a0744d19e29\") " pod="kube-system/kube-proxy-9nz6z" Mar 19 11:43:51.078075 kubelet[1808]: I0319 11:43:51.077654 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28cc52f2-b4a5-4277-864b-64eb1318d7af-kubelet-dir\") pod \"csi-node-driver-tkgw6\" (UID: \"28cc52f2-b4a5-4277-864b-64eb1318d7af\") " pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:43:51.078075 kubelet[1808]: I0319 11:43:51.077688 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cthb\" (UniqueName: \"kubernetes.io/projected/28cc52f2-b4a5-4277-864b-64eb1318d7af-kube-api-access-7cthb\") pod \"csi-node-driver-tkgw6\" (UID: \"28cc52f2-b4a5-4277-864b-64eb1318d7af\") " pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:43:51.078075 kubelet[1808]: I0319 11:43:51.077762 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a4aa789-3717-4114-83b7-8a0744d19e29-lib-modules\") pod \"kube-proxy-9nz6z\" (UID: \"3a4aa789-3717-4114-83b7-8a0744d19e29\") " pod="kube-system/kube-proxy-9nz6z" Mar 19 11:43:51.102027 systemd[1]: Created slice kubepods-besteffort-pod3a4aa789_3717_4114_83b7_8a0744d19e29.slice - libcontainer container kubepods-besteffort-pod3a4aa789_3717_4114_83b7_8a0744d19e29.slice. Mar 19 11:43:51.180926 kubelet[1808]: E0319 11:43:51.180884 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.181162 kubelet[1808]: W0319 11:43:51.181133 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.181332 kubelet[1808]: E0319 11:43:51.181270 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.182241 kubelet[1808]: E0319 11:43:51.181785 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.182241 kubelet[1808]: W0319 11:43:51.181806 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.182241 kubelet[1808]: E0319 11:43:51.181828 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.183075 kubelet[1808]: E0319 11:43:51.182974 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.183075 kubelet[1808]: W0319 11:43:51.182994 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.183075 kubelet[1808]: E0319 11:43:51.183035 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.183433 kubelet[1808]: E0319 11:43:51.183400 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.183433 kubelet[1808]: W0319 11:43:51.183425 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.183581 kubelet[1808]: E0319 11:43:51.183452 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.183739 kubelet[1808]: E0319 11:43:51.183706 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.183739 kubelet[1808]: W0319 11:43:51.183723 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.183739 kubelet[1808]: E0319 11:43:51.183734 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.183940 kubelet[1808]: E0319 11:43:51.183923 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.183940 kubelet[1808]: W0319 11:43:51.183936 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.184100 kubelet[1808]: E0319 11:43:51.184086 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.184209 kubelet[1808]: E0319 11:43:51.184194 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.184209 kubelet[1808]: W0319 11:43:51.184208 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.184357 kubelet[1808]: E0319 11:43:51.184317 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.184444 kubelet[1808]: E0319 11:43:51.184414 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.184444 kubelet[1808]: W0319 11:43:51.184424 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.184569 kubelet[1808]: E0319 11:43:51.184503 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.184902 kubelet[1808]: E0319 11:43:51.184858 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.184902 kubelet[1808]: W0319 11:43:51.184890 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.185072 kubelet[1808]: E0319 11:43:51.185034 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.185155 kubelet[1808]: E0319 11:43:51.185125 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.185155 kubelet[1808]: W0319 11:43:51.185137 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.185264 kubelet[1808]: E0319 11:43:51.185224 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.185422 kubelet[1808]: E0319 11:43:51.185407 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.185422 kubelet[1808]: W0319 11:43:51.185418 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.185621 kubelet[1808]: E0319 11:43:51.185579 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.185702 kubelet[1808]: E0319 11:43:51.185680 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.185702 kubelet[1808]: W0319 11:43:51.185691 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.185978 kubelet[1808]: E0319 11:43:51.185814 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.186057 kubelet[1808]: E0319 11:43:51.186020 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.186057 kubelet[1808]: W0319 11:43:51.186032 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186165 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186295 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.186977 kubelet[1808]: W0319 11:43:51.186305 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186565 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.186977 kubelet[1808]: W0319 11:43:51.186580 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186786 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.186977 kubelet[1808]: W0319 11:43:51.186798 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186895 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186917 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.186977 kubelet[1808]: E0319 11:43:51.186949 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.187417 kubelet[1808]: E0319 11:43:51.187040 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.187417 kubelet[1808]: W0319 11:43:51.187052 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.187417 kubelet[1808]: E0319 11:43:51.187137 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.187417 kubelet[1808]: E0319 11:43:51.187310 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.187417 kubelet[1808]: W0319 11:43:51.187321 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.187417 kubelet[1808]: E0319 11:43:51.187402 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.187694 kubelet[1808]: E0319 11:43:51.187670 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.187694 kubelet[1808]: W0319 11:43:51.187693 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.187901 kubelet[1808]: E0319 11:43:51.187881 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.188256 kubelet[1808]: E0319 11:43:51.188219 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.188256 kubelet[1808]: W0319 11:43:51.188244 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.188357 kubelet[1808]: E0319 11:43:51.188271 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.188598 kubelet[1808]: E0319 11:43:51.188578 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.188598 kubelet[1808]: W0319 11:43:51.188595 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.188897 kubelet[1808]: E0319 11:43:51.188618 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.189274 kubelet[1808]: E0319 11:43:51.189140 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.189274 kubelet[1808]: W0319 11:43:51.189156 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.189274 kubelet[1808]: E0319 11:43:51.189244 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.189590 kubelet[1808]: E0319 11:43:51.189562 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.189590 kubelet[1808]: W0319 11:43:51.189581 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.189707 kubelet[1808]: E0319 11:43:51.189667 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.189884 kubelet[1808]: E0319 11:43:51.189870 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.189884 kubelet[1808]: W0319 11:43:51.189885 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.190013 kubelet[1808]: E0319 11:43:51.189968 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.190200 kubelet[1808]: E0319 11:43:51.190182 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.190200 kubelet[1808]: W0319 11:43:51.190197 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.190305 kubelet[1808]: E0319 11:43:51.190283 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.190717 kubelet[1808]: E0319 11:43:51.190693 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.190717 kubelet[1808]: W0319 11:43:51.190716 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.190932 kubelet[1808]: E0319 11:43:51.190898 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.191171 kubelet[1808]: E0319 11:43:51.191148 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.191171 kubelet[1808]: W0319 11:43:51.191167 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.191701 kubelet[1808]: E0319 11:43:51.191208 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.191701 kubelet[1808]: E0319 11:43:51.191456 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.191701 kubelet[1808]: W0319 11:43:51.191470 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.191832 kubelet[1808]: E0319 11:43:51.191803 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.191832 kubelet[1808]: W0319 11:43:51.191816 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.192039 kubelet[1808]: E0319 11:43:51.192015 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.192182 kubelet[1808]: E0319 11:43:51.192129 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.192182 kubelet[1808]: W0319 11:43:51.192147 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.192182 kubelet[1808]: E0319 11:43:51.192163 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.193099 kubelet[1808]: E0319 11:43:51.192164 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.203264 kubelet[1808]: E0319 11:43:51.200963 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.203264 kubelet[1808]: W0319 11:43:51.200995 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.203264 kubelet[1808]: E0319 11:43:51.201020 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.228907 kubelet[1808]: E0319 11:43:51.228838 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.228907 kubelet[1808]: W0319 11:43:51.228885 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.228907 kubelet[1808]: E0319 11:43:51.228921 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:51.246156 kubelet[1808]: E0319 11:43:51.246099 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.246300 kubelet[1808]: W0319 11:43:51.246139 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.246300 kubelet[1808]: E0319 11:43:51.246295 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.253721 kubelet[1808]: E0319 11:43:51.250440 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:51.253721 kubelet[1808]: W0319 11:43:51.250466 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:51.253721 kubelet[1808]: E0319 11:43:51.250515 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:51.391179 kubelet[1808]: E0319 11:43:51.390676 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:51.392986 containerd[1479]: time="2025-03-19T11:43:51.392473645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqqjg,Uid:88ee1369-af48-4d58-80bc-54697ac34cbe,Namespace:calico-system,Attempt:0,}" Mar 19 11:43:51.411841 kubelet[1808]: E0319 11:43:51.411473 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:51.412180 containerd[1479]: time="2025-03-19T11:43:51.412143533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nz6z,Uid:3a4aa789-3717-4114-83b7-8a0744d19e29,Namespace:kube-system,Attempt:0,}" Mar 19 11:43:51.981524 containerd[1479]: time="2025-03-19T11:43:51.979837222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:43:51.985183 containerd[1479]: time="2025-03-19T11:43:51.985121106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:43:51.988409 containerd[1479]: time="2025-03-19T11:43:51.988332074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 19 11:43:51.990001 containerd[1479]: time="2025-03-19T11:43:51.989865027Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:43:51.990622 containerd[1479]: time="2025-03-19T11:43:51.990575353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=0" Mar 19 11:43:51.996562 containerd[1479]: time="2025-03-19T11:43:51.995427852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:43:51.997033 containerd[1479]: time="2025-03-19T11:43:51.996575847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.214347ms" Mar 19 11:43:52.001425 containerd[1479]: time="2025-03-19T11:43:52.001361625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.119578ms" Mar 19 11:43:52.047772 kubelet[1808]: E0319 11:43:52.047680 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:52.187942 containerd[1479]: time="2025-03-19T11:43:52.186156185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:43:52.187942 containerd[1479]: time="2025-03-19T11:43:52.186242823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:43:52.187942 containerd[1479]: time="2025-03-19T11:43:52.186263256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:43:52.187942 containerd[1479]: time="2025-03-19T11:43:52.186364165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:43:52.192164 containerd[1479]: time="2025-03-19T11:43:52.190719838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:43:52.192164 containerd[1479]: time="2025-03-19T11:43:52.190923038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:43:52.192164 containerd[1479]: time="2025-03-19T11:43:52.191029419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:43:52.192164 containerd[1479]: time="2025-03-19T11:43:52.191342496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:43:52.215316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757104081.mount: Deactivated successfully. Mar 19 11:43:52.321031 systemd[1]: Started cri-containerd-2b5b965832a073b6c09559296f88cae5c2b0daeeb58a594c0fd7c1c173de15fe.scope - libcontainer container 2b5b965832a073b6c09559296f88cae5c2b0daeeb58a594c0fd7c1c173de15fe. 
Mar 19 11:43:52.324178 systemd[1]: Started cri-containerd-a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63.scope - libcontainer container a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63. Mar 19 11:43:52.370000 containerd[1479]: time="2025-03-19T11:43:52.369839394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nz6z,Uid:3a4aa789-3717-4114-83b7-8a0744d19e29,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b5b965832a073b6c09559296f88cae5c2b0daeeb58a594c0fd7c1c173de15fe\"" Mar 19 11:43:52.375232 kubelet[1808]: E0319 11:43:52.374756 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:52.377828 containerd[1479]: time="2025-03-19T11:43:52.377540868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 19 11:43:52.383696 containerd[1479]: time="2025-03-19T11:43:52.383149955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqqjg,Uid:88ee1369-af48-4d58-80bc-54697ac34cbe,Namespace:calico-system,Attempt:0,} returns sandbox id \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\"" Mar 19 11:43:52.384803 kubelet[1808]: E0319 11:43:52.384470 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:53.048217 kubelet[1808]: E0319 11:43:53.048154 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:53.177286 kubelet[1808]: E0319 11:43:53.176190 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:43:53.302829 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Mar 19 11:43:53.618056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593412508.mount: Deactivated successfully. 
Mar 19 11:43:54.050151 kubelet[1808]: E0319 11:43:54.049932 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:54.322850 containerd[1479]: time="2025-03-19T11:43:54.321967703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.324092 containerd[1479]: time="2025-03-19T11:43:54.324010036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 19 11:43:54.325444 containerd[1479]: time="2025-03-19T11:43:54.325283973Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.328410 containerd[1479]: time="2025-03-19T11:43:54.328341866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.330228 containerd[1479]: time="2025-03-19T11:43:54.329757799Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.952168839s" Mar 19 11:43:54.330228 containerd[1479]: time="2025-03-19T11:43:54.329819312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 19 11:43:54.333707 containerd[1479]: time="2025-03-19T11:43:54.332378980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 19 11:43:54.336538 containerd[1479]: time="2025-03-19T11:43:54.336460525Z" level=info msg="CreateContainer within sandbox \"2b5b965832a073b6c09559296f88cae5c2b0daeeb58a594c0fd7c1c173de15fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:43:54.360422 containerd[1479]: time="2025-03-19T11:43:54.360204891Z" level=info msg="CreateContainer within sandbox \"2b5b965832a073b6c09559296f88cae5c2b0daeeb58a594c0fd7c1c173de15fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da9bbaa739c390d180f2604137314f24ece967a6170265fe9f3370790dd06020\"" Mar 19 11:43:54.361564 containerd[1479]: time="2025-03-19T11:43:54.361519605Z" level=info msg="StartContainer for \"da9bbaa739c390d180f2604137314f24ece967a6170265fe9f3370790dd06020\"" Mar 19 11:43:54.414777 systemd[1]: Started cri-containerd-da9bbaa739c390d180f2604137314f24ece967a6170265fe9f3370790dd06020.scope - libcontainer container da9bbaa739c390d180f2604137314f24ece967a6170265fe9f3370790dd06020. 
Mar 19 11:43:54.469077 containerd[1479]: time="2025-03-19T11:43:54.469017604Z" level=info msg="StartContainer for \"da9bbaa739c390d180f2604137314f24ece967a6170265fe9f3370790dd06020\" returns successfully" Mar 19 11:43:55.050594 kubelet[1808]: E0319 11:43:55.050503 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:55.177631 kubelet[1808]: E0319 11:43:55.177477 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:43:55.196980 kubelet[1808]: E0319 11:43:55.195963 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:55.292959 kubelet[1808]: E0319 11:43:55.292901 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.293256 kubelet[1808]: W0319 11:43:55.293220 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.293412 kubelet[1808]: E0319 11:43:55.293389 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.294032 kubelet[1808]: E0319 11:43:55.293934 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.294032 kubelet[1808]: W0319 11:43:55.293968 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.294032 kubelet[1808]: E0319 11:43:55.293992 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.294875 kubelet[1808]: E0319 11:43:55.294674 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.294875 kubelet[1808]: W0319 11:43:55.294693 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.294875 kubelet[1808]: E0319 11:43:55.294709 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.295146 kubelet[1808]: E0319 11:43:55.295110 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.295220 kubelet[1808]: W0319 11:43:55.295206 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.295374 kubelet[1808]: E0319 11:43:55.295296 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.295928 kubelet[1808]: E0319 11:43:55.295826 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.295928 kubelet[1808]: W0319 11:43:55.295854 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.295928 kubelet[1808]: E0319 11:43:55.295868 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.296458 kubelet[1808]: E0319 11:43:55.296324 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.296458 kubelet[1808]: W0319 11:43:55.296342 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.296458 kubelet[1808]: E0319 11:43:55.296373 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.297531 kubelet[1808]: E0319 11:43:55.297054 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.297531 kubelet[1808]: W0319 11:43:55.297082 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.297531 kubelet[1808]: E0319 11:43:55.297096 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.297531 kubelet[1808]: E0319 11:43:55.297364 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.297531 kubelet[1808]: W0319 11:43:55.297374 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.297531 kubelet[1808]: E0319 11:43:55.297386 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.298299 kubelet[1808]: E0319 11:43:55.298276 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.298299 kubelet[1808]: W0319 11:43:55.298296 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.298449 kubelet[1808]: E0319 11:43:55.298320 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.298761 kubelet[1808]: E0319 11:43:55.298743 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.298825 kubelet[1808]: W0319 11:43:55.298769 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.298825 kubelet[1808]: E0319 11:43:55.298800 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.299075 kubelet[1808]: E0319 11:43:55.299059 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.299075 kubelet[1808]: W0319 11:43:55.299074 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.299181 kubelet[1808]: E0319 11:43:55.299092 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.299346 kubelet[1808]: E0319 11:43:55.299330 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.299346 kubelet[1808]: W0319 11:43:55.299345 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.299444 kubelet[1808]: E0319 11:43:55.299358 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.299698 kubelet[1808]: E0319 11:43:55.299678 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.299698 kubelet[1808]: W0319 11:43:55.299696 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.299814 kubelet[1808]: E0319 11:43:55.299711 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.299986 kubelet[1808]: E0319 11:43:55.299969 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.300041 kubelet[1808]: W0319 11:43:55.299985 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.300041 kubelet[1808]: E0319 11:43:55.300000 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.300271 kubelet[1808]: E0319 11:43:55.300252 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.300271 kubelet[1808]: W0319 11:43:55.300270 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.300387 kubelet[1808]: E0319 11:43:55.300285 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.300837 kubelet[1808]: E0319 11:43:55.300597 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.300837 kubelet[1808]: W0319 11:43:55.300614 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.300837 kubelet[1808]: E0319 11:43:55.300644 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.301003 kubelet[1808]: E0319 11:43:55.300946 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.301003 kubelet[1808]: W0319 11:43:55.300960 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.301003 kubelet[1808]: E0319 11:43:55.300974 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.302849 kubelet[1808]: E0319 11:43:55.302821 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.302849 kubelet[1808]: W0319 11:43:55.302845 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.303018 kubelet[1808]: E0319 11:43:55.302866 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.303344 kubelet[1808]: E0319 11:43:55.303313 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.303344 kubelet[1808]: W0319 11:43:55.303333 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.303455 kubelet[1808]: E0319 11:43:55.303352 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.303722 kubelet[1808]: E0319 11:43:55.303682 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.303722 kubelet[1808]: W0319 11:43:55.303700 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.303722 kubelet[1808]: E0319 11:43:55.303714 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.309663 kubelet[1808]: E0319 11:43:55.309598 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.309663 kubelet[1808]: W0319 11:43:55.309631 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.309663 kubelet[1808]: E0319 11:43:55.309655 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.310092 kubelet[1808]: E0319 11:43:55.310063 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.310092 kubelet[1808]: W0319 11:43:55.310083 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.310232 kubelet[1808]: E0319 11:43:55.310116 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.310414 kubelet[1808]: E0319 11:43:55.310387 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.310414 kubelet[1808]: W0319 11:43:55.310405 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.310715 kubelet[1808]: E0319 11:43:55.310422 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.310715 kubelet[1808]: E0319 11:43:55.310623 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.310715 kubelet[1808]: W0319 11:43:55.310631 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.310715 kubelet[1808]: E0319 11:43:55.310650 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.310858 kubelet[1808]: E0319 11:43:55.310843 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.310858 kubelet[1808]: W0319 11:43:55.310855 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.310982 kubelet[1808]: E0319 11:43:55.310875 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.311260 kubelet[1808]: E0319 11:43:55.311224 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.311260 kubelet[1808]: W0319 11:43:55.311245 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.311260 kubelet[1808]: E0319 11:43:55.311266 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.311766 kubelet[1808]: E0319 11:43:55.311618 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.311766 kubelet[1808]: W0319 11:43:55.311638 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.311766 kubelet[1808]: E0319 11:43:55.311660 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.311978 kubelet[1808]: E0319 11:43:55.311962 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.312186 kubelet[1808]: W0319 11:43:55.312041 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.312186 kubelet[1808]: E0319 11:43:55.312072 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:43:55.312316 kubelet[1808]: E0319 11:43:55.312305 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.312382 kubelet[1808]: W0319 11:43:55.312371 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.312445 kubelet[1808]: E0319 11:43:55.312435 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.312974 kubelet[1808]: E0319 11:43:55.312946 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.312974 kubelet[1808]: W0319 11:43:55.312968 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.313084 kubelet[1808]: E0319 11:43:55.313025 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.313389 kubelet[1808]: E0319 11:43:55.313370 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.313389 kubelet[1808]: W0319 11:43:55.313388 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.313482 kubelet[1808]: E0319 11:43:55.313404 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.314038 kubelet[1808]: E0319 11:43:55.314009 1808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:43:55.314038 kubelet[1808]: W0319 11:43:55.314028 1808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:43:55.314121 kubelet[1808]: E0319 11:43:55.314045 1808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:43:55.595281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765548767.mount: Deactivated successfully. 
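The burst of kubelet messages above is one failure repeated by the FlexVolume dynamic plugin prober: it keeps invoking the nodeagent~uds driver with `init`, the executable is not present under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and the resulting empty output then fails JSON decoding, so every probe logs the same driver-call.go / plugins.go triple. Below is a minimal sketch of that call path, assuming a conventional FlexVolume JSON status object; it is not kubelet's actual code, and the `driverStatus` field names are assumptions for illustration.

```go
// Minimal sketch (not kubelet's driver-call code) of a FlexVolume "init" probe.
// The driver binary is exec'd with "init" and its output is decoded as JSON;
// with no binary at the path, the output is empty and json.Unmarshal fails with
// "unexpected end of JSON input", matching the kubelet log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus follows the usual FlexVolume response convention; the field
// names are assumptions, not a quoted API.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driver string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With no binary on disk, out is empty: this is the error in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	if _, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println("FlexVolume driver call failed:", err)
	}
}
```

The messages repeat because probing is retried; they stop once a driver binary actually exists at that path (or there is simply nothing for dynamic probing to register).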
Mar 19 11:43:55.740550 containerd[1479]: time="2025-03-19T11:43:55.739370506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.740550 containerd[1479]: time="2025-03-19T11:43:55.740417816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 19 11:43:55.740550 containerd[1479]: time="2025-03-19T11:43:55.740468224Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.743036 containerd[1479]: time="2025-03-19T11:43:55.742975680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.744031 containerd[1479]: time="2025-03-19T11:43:55.743989291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.411518757s" Mar 19 11:43:55.744294 containerd[1479]: time="2025-03-19T11:43:55.744172896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 19 11:43:55.747650 containerd[1479]: time="2025-03-19T11:43:55.747577324Z" level=info msg="CreateContainer within sandbox \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 19 11:43:55.767573 containerd[1479]: time="2025-03-19T11:43:55.766574927Z" level=info msg="CreateContainer within sandbox \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db\"" Mar 19 11:43:55.770191 containerd[1479]: time="2025-03-19T11:43:55.770139472Z" level=info msg="StartContainer for \"5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db\"" Mar 19 11:43:55.830853 systemd[1]: Started cri-containerd-5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db.scope - libcontainer container 5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db. Mar 19 11:43:55.868073 containerd[1479]: time="2025-03-19T11:43:55.867800997Z" level=info msg="StartContainer for \"5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db\" returns successfully" Mar 19 11:43:55.885423 systemd[1]: cri-containerd-5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db.scope: Deactivated successfully. 
Mar 19 11:43:55.995758 containerd[1479]: time="2025-03-19T11:43:55.995594640Z" level=info msg="shim disconnected" id=5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db namespace=k8s.io Mar 19 11:43:55.995758 containerd[1479]: time="2025-03-19T11:43:55.995680850Z" level=warning msg="cleaning up after shim disconnected" id=5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db namespace=k8s.io Mar 19 11:43:55.995758 containerd[1479]: time="2025-03-19T11:43:55.995696057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:43:56.051338 kubelet[1808]: E0319 11:43:56.051261 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:56.201393 kubelet[1808]: E0319 11:43:56.201173 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:56.202521 kubelet[1808]: E0319 11:43:56.202041 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:43:56.203404 containerd[1479]: time="2025-03-19T11:43:56.203361287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 19 11:43:56.268293 kubelet[1808]: I0319 11:43:56.268217 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9nz6z" podStartSLOduration=5.313018275 podStartE2EDuration="7.268199153s" podCreationTimestamp="2025-03-19 11:43:49 +0000 UTC" firstStartedPulling="2025-03-19 11:43:52.376944887 +0000 UTC m=+4.116629738" lastFinishedPulling="2025-03-19 11:43:54.332125742 +0000 UTC m=+6.071810616" observedRunningTime="2025-03-19 11:43:55.219581873 +0000 UTC m=+6.959266743" watchObservedRunningTime="2025-03-19 11:43:56.268199153 +0000 UTC m=+8.007884024" Mar 19 11:43:56.375007 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Mar 19 11:43:56.537292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5986663e43c394fa98133051720d7ef9bca8e94353fe5b391c0528311fbcb1db-rootfs.mount: Deactivated successfully. 
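The pod_startup_latency_tracker entry for kube-system/kube-proxy-9nz6z can be cross-checked from its own timestamps: podStartE2EDuration works out to watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below redoes the arithmetic from the wall-clock values as logged; the logged SLO (5.313018275s) is reproduced exactly only when the pull window is taken from the monotonic m=+… readings, which is why the last few nanoseconds differ here.

```go
// Rough cross-check (not the tracker's own code) of the logged startup durations.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Fractional seconds in the input are accepted even though the layout omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-03-19 11:43:49 +0000 UTC")
	firstPull := mustParse("2025-03-19 11:43:52.376944887 +0000 UTC")
	lastPull := mustParse("2025-03-19 11:43:54.332125742 +0000 UTC")
	running := mustParse("2025-03-19 11:43:56.268199153 +0000 UTC")

	e2e := running.Sub(created)          // 7.268199153s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~5.313018s, matching podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO (excluding image pull):", slo)
}
```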
Mar 19 11:43:57.051996 kubelet[1808]: E0319 11:43:57.051814 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:57.177318 kubelet[1808]: E0319 11:43:57.176447 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:43:58.052160 kubelet[1808]: E0319 11:43:58.052047 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:59.053144 kubelet[1808]: E0319 11:43:59.053023 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:43:59.176898 kubelet[1808]: E0319 11:43:59.176821 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:00.054208 kubelet[1808]: E0319 11:44:00.053761 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:00.134864 containerd[1479]: time="2025-03-19T11:44:00.133931407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:00.136161 containerd[1479]: time="2025-03-19T11:44:00.136100588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 19 11:44:00.138656 containerd[1479]: time="2025-03-19T11:44:00.138599479Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:00.139807 containerd[1479]: time="2025-03-19T11:44:00.139757080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:00.141260 containerd[1479]: time="2025-03-19T11:44:00.141212538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 3.937770801s" Mar 19 11:44:00.141532 containerd[1479]: time="2025-03-19T11:44:00.141411623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 19 11:44:00.144521 containerd[1479]: time="2025-03-19T11:44:00.144455829Z" level=info msg="CreateContainer within sandbox \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:44:00.166783 containerd[1479]: time="2025-03-19T11:44:00.166698380Z" level=info msg="CreateContainer within sandbox 
\"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903\"" Mar 19 11:44:00.168562 containerd[1479]: time="2025-03-19T11:44:00.167710397Z" level=info msg="StartContainer for \"0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903\"" Mar 19 11:44:00.226808 systemd[1]: Started cri-containerd-0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903.scope - libcontainer container 0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903. Mar 19 11:44:00.265250 containerd[1479]: time="2025-03-19T11:44:00.265132412Z" level=info msg="StartContainer for \"0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903\" returns successfully" Mar 19 11:44:00.971292 containerd[1479]: time="2025-03-19T11:44:00.971174301Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:44:00.973934 systemd[1]: cri-containerd-0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903.scope: Deactivated successfully. Mar 19 11:44:00.975893 systemd[1]: cri-containerd-0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903.scope: Consumed 787ms CPU time, 171.7M memory peak, 154M written to disk. Mar 19 11:44:01.005084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903-rootfs.mount: Deactivated successfully. Mar 19 11:44:01.054731 kubelet[1808]: E0319 11:44:01.054667 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:01.058031 kubelet[1808]: I0319 11:44:01.058000 1808 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 19 11:44:01.081627 containerd[1479]: time="2025-03-19T11:44:01.081483019Z" level=info msg="shim disconnected" id=0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903 namespace=k8s.io Mar 19 11:44:01.081963 containerd[1479]: time="2025-03-19T11:44:01.081686876Z" level=warning msg="cleaning up after shim disconnected" id=0efd342361687c782653a9ced9d59b860352226cfeb2c94d47272ac89094a903 namespace=k8s.io Mar 19 11:44:01.081963 containerd[1479]: time="2025-03-19T11:44:01.081702400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:01.185323 systemd[1]: Created slice kubepods-besteffort-pod28cc52f2_b4a5_4277_864b_64eb1318d7af.slice - libcontainer container kubepods-besteffort-pod28cc52f2_b4a5_4277_864b_64eb1318d7af.slice. Mar 19 11:44:01.190025 containerd[1479]: time="2025-03-19T11:44:01.189970744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:0,}" Mar 19 11:44:01.225388 kubelet[1808]: E0319 11:44:01.224909 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:44:01.226435 containerd[1479]: time="2025-03-19T11:44:01.226329458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 19 11:44:01.228897 systemd-resolved[1333]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Mar 19 11:44:01.310047 containerd[1479]: time="2025-03-19T11:44:01.309872928Z" level=error msg="Failed to destroy network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:01.313206 containerd[1479]: time="2025-03-19T11:44:01.313077021Z" level=error msg="encountered an error cleaning up failed sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:01.313564 containerd[1479]: time="2025-03-19T11:44:01.313372883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:01.313749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384-shm.mount: Deactivated successfully. Mar 19 11:44:01.314831 kubelet[1808]: E0319 11:44:01.314349 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:01.314831 kubelet[1808]: E0319 11:44:01.314431 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:01.314831 kubelet[1808]: E0319 11:44:01.314458 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:01.314990 kubelet[1808]: E0319 11:44:01.314522 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:02.056209 kubelet[1808]: E0319 11:44:02.056099 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:02.229965 kubelet[1808]: I0319 11:44:02.229481 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384" Mar 19 11:44:02.231723 containerd[1479]: time="2025-03-19T11:44:02.230717991Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:02.231723 containerd[1479]: time="2025-03-19T11:44:02.231117843Z" level=info msg="Ensure that sandbox 10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384 in task-service has been cleanup successfully" Mar 19 11:44:02.233664 containerd[1479]: time="2025-03-19T11:44:02.233606211Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:02.233664 containerd[1479]: time="2025-03-19T11:44:02.233653262Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:02.235871 containerd[1479]: time="2025-03-19T11:44:02.234601648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:1,}" Mar 19 11:44:02.237419 systemd[1]: run-netns-cni\x2d61bb458a\x2d76d4\x2da518\x2dae35\x2d1ea5c83d9cb4.mount: Deactivated successfully. Mar 19 11:44:02.405879 containerd[1479]: time="2025-03-19T11:44:02.405704965Z" level=error msg="Failed to destroy network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:02.412383 containerd[1479]: time="2025-03-19T11:44:02.407973560Z" level=error msg="encountered an error cleaning up failed sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:02.412383 containerd[1479]: time="2025-03-19T11:44:02.408190179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:02.411424 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7-shm.mount: Deactivated successfully. 
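From here on, every RunPodSandbox failure has the same root cause spelled out in its error text: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, and that file only appears once the calico/node container (whose image pull starts above as ghcr.io/flatcar/calico/node:v3.29.2) is running with /var/lib/calico/ mounted. A minimal preflight sketch of that check, illustrative rather than Calico's own code:

```go
// Illustrative preflight for the failure mode above: until calico/node has
// written /var/lib/calico/nodename, every sandbox ADD/DEL ends in the
// "stat /var/lib/calico/nodename: no such file or directory" error.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeName() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("calico node name:", name)
}
```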
Mar 19 11:44:02.413314 kubelet[1808]: E0319 11:44:02.409697 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:02.413314 kubelet[1808]: E0319 11:44:02.409787 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:02.413314 kubelet[1808]: E0319 11:44:02.409819 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:02.413528 kubelet[1808]: E0319 11:44:02.409873 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:02.646403 kubelet[1808]: W0319 11:44:02.646296 1808 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:146.190.145.41" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '146.190.145.41' and this object Mar 19 11:44:02.646403 kubelet[1808]: I0319 11:44:02.646323 1808 status_manager.go:890] "Failed to get status for pod" podUID="ac7589c1-2b13-4e45-8172-5c4933a8cd33" pod="default/nginx-deployment-7fcdb87857-q2mnk" err="pods \"nginx-deployment-7fcdb87857-q2mnk\" is forbidden: User \"system:node:146.190.145.41\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node '146.190.145.41' and this object" Mar 19 11:44:02.646403 kubelet[1808]: E0319 11:44:02.646341 1808 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:146.190.145.41\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '146.190.145.41' and this object" logger="UnhandledError" Mar 19 11:44:02.651796 systemd[1]: 
Created slice kubepods-besteffort-podac7589c1_2b13_4e45_8172_5c4933a8cd33.slice - libcontainer container kubepods-besteffort-podac7589c1_2b13_4e45_8172_5c4933a8cd33.slice. Mar 19 11:44:02.689470 kubelet[1808]: I0319 11:44:02.689231 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnln\" (UniqueName: \"kubernetes.io/projected/ac7589c1-2b13-4e45-8172-5c4933a8cd33-kube-api-access-wvnln\") pod \"nginx-deployment-7fcdb87857-q2mnk\" (UID: \"ac7589c1-2b13-4e45-8172-5c4933a8cd33\") " pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:03.057012 kubelet[1808]: E0319 11:44:03.056950 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:03.234519 kubelet[1808]: I0319 11:44:03.233175 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7" Mar 19 11:44:03.234705 containerd[1479]: time="2025-03-19T11:44:03.233956180Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:03.234705 containerd[1479]: time="2025-03-19T11:44:03.234221449Z" level=info msg="Ensure that sandbox b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7 in task-service has been cleanup successfully" Mar 19 11:44:03.236712 containerd[1479]: time="2025-03-19T11:44:03.236656959Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:03.236712 containerd[1479]: time="2025-03-19T11:44:03.236700317Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:03.238392 containerd[1479]: time="2025-03-19T11:44:03.237347326Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:03.238392 containerd[1479]: time="2025-03-19T11:44:03.237467611Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:03.238392 containerd[1479]: time="2025-03-19T11:44:03.237479636Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:03.238129 systemd[1]: run-netns-cni\x2dae167fd9\x2dd1a8\x2d2325\x2d3807\x2d4978fa7502cc.mount: Deactivated successfully. 
Mar 19 11:44:03.240474 containerd[1479]: time="2025-03-19T11:44:03.240433005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:2,}" Mar 19 11:44:03.437394 containerd[1479]: time="2025-03-19T11:44:03.437230159Z" level=error msg="Failed to destroy network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:03.438233 containerd[1479]: time="2025-03-19T11:44:03.438168324Z" level=error msg="encountered an error cleaning up failed sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:03.438345 containerd[1479]: time="2025-03-19T11:44:03.438248309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:03.439729 kubelet[1808]: E0319 11:44:03.439678 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:03.439856 kubelet[1808]: E0319 11:44:03.439747 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:03.439856 kubelet[1808]: E0319 11:44:03.439769 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:03.439856 kubelet[1808]: E0319 11:44:03.439811 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:03.441557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91-shm.mount: Deactivated successfully. Mar 19 11:44:03.800847 kubelet[1808]: E0319 11:44:03.800756 1808 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 19 11:44:03.800847 kubelet[1808]: E0319 11:44:03.800809 1808 projected.go:194] Error preparing data for projected volume kube-api-access-wvnln for pod default/nginx-deployment-7fcdb87857-q2mnk: failed to sync configmap cache: timed out waiting for the condition Mar 19 11:44:03.801420 kubelet[1808]: E0319 11:44:03.800911 1808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ac7589c1-2b13-4e45-8172-5c4933a8cd33-kube-api-access-wvnln podName:ac7589c1-2b13-4e45-8172-5c4933a8cd33 nodeName:}" failed. No retries permitted until 2025-03-19 11:44:04.300886399 +0000 UTC m=+16.040571264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wvnln" (UniqueName: "kubernetes.io/projected/ac7589c1-2b13-4e45-8172-5c4933a8cd33-kube-api-access-wvnln") pod "nginx-deployment-7fcdb87857-q2mnk" (UID: "ac7589c1-2b13-4e45-8172-5c4933a8cd33") : failed to sync configmap cache: timed out waiting for the condition Mar 19 11:44:04.057959 kubelet[1808]: E0319 11:44:04.057794 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:04.240906 kubelet[1808]: I0319 11:44:04.240849 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91" Mar 19 11:44:04.242161 containerd[1479]: time="2025-03-19T11:44:04.241849612Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" Mar 19 11:44:04.242161 containerd[1479]: time="2025-03-19T11:44:04.242141806Z" level=info msg="Ensure that sandbox f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91 in task-service has been cleanup successfully" Mar 19 11:44:04.247929 containerd[1479]: time="2025-03-19T11:44:04.246649947Z" level=info msg="TearDown network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" successfully" Mar 19 11:44:04.247929 containerd[1479]: time="2025-03-19T11:44:04.246705222Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" returns successfully" Mar 19 11:44:04.247097 systemd[1]: run-netns-cni\x2db46a0b63\x2d7b8b\x2dd785\x2defdc\x2d17f5ba6bdd9b.mount: Deactivated successfully. 
Mar 19 11:44:04.249010 containerd[1479]: time="2025-03-19T11:44:04.248822465Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:04.250839 containerd[1479]: time="2025-03-19T11:44:04.250642034Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:04.250839 containerd[1479]: time="2025-03-19T11:44:04.250687786Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:04.252102 containerd[1479]: time="2025-03-19T11:44:04.252060949Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:04.252943 containerd[1479]: time="2025-03-19T11:44:04.252351033Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:04.252943 containerd[1479]: time="2025-03-19T11:44:04.252381108Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:04.253829 containerd[1479]: time="2025-03-19T11:44:04.253316528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:3,}" Mar 19 11:44:04.396911 containerd[1479]: time="2025-03-19T11:44:04.396613216Z" level=error msg="Failed to destroy network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.400712 containerd[1479]: time="2025-03-19T11:44:04.400376962Z" level=error msg="encountered an error cleaning up failed sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.400712 containerd[1479]: time="2025-03-19T11:44:04.400523878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.401252 kubelet[1808]: E0319 11:44:04.401202 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.401732 kubelet[1808]: E0319 11:44:04.401466 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:04.401732 kubelet[1808]: E0319 11:44:04.401551 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:04.401732 kubelet[1808]: E0319 11:44:04.401651 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:04.401688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c-shm.mount: Deactivated successfully. Mar 19 11:44:04.457270 containerd[1479]: time="2025-03-19T11:44:04.456410027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:0,}" Mar 19 11:44:04.587616 containerd[1479]: time="2025-03-19T11:44:04.587548247Z" level=error msg="Failed to destroy network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.588843 containerd[1479]: time="2025-03-19T11:44:04.588782432Z" level=error msg="encountered an error cleaning up failed sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.589200 containerd[1479]: time="2025-03-19T11:44:04.589058912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.590047 kubelet[1808]: E0319 11:44:04.589526 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:04.590047 kubelet[1808]: E0319 11:44:04.589654 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:04.590047 kubelet[1808]: E0319 11:44:04.589687 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:04.590248 kubelet[1808]: E0319 11:44:04.589741 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-q2mnk" podUID="ac7589c1-2b13-4e45-8172-5c4933a8cd33" Mar 19 11:44:05.058618 kubelet[1808]: E0319 11:44:05.058567 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:05.250440 kubelet[1808]: I0319 11:44:05.250126 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c" Mar 19 11:44:05.251863 containerd[1479]: time="2025-03-19T11:44:05.251555119Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\"" Mar 19 11:44:05.253187 containerd[1479]: time="2025-03-19T11:44:05.252793105Z" level=info msg="Ensure that sandbox 9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c in task-service has been cleanup successfully" Mar 19 11:44:05.254656 containerd[1479]: time="2025-03-19T11:44:05.253985799Z" level=info msg="TearDown network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" successfully" Mar 19 11:44:05.254656 containerd[1479]: time="2025-03-19T11:44:05.254022828Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" returns successfully" Mar 19 11:44:05.256964 containerd[1479]: time="2025-03-19T11:44:05.256917485Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" Mar 19 11:44:05.257089 containerd[1479]: time="2025-03-19T11:44:05.257065866Z" level=info msg="TearDown network for sandbox 
\"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" successfully" Mar 19 11:44:05.257134 containerd[1479]: time="2025-03-19T11:44:05.257089430Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" returns successfully" Mar 19 11:44:05.257740 kubelet[1808]: I0319 11:44:05.257604 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880" Mar 19 11:44:05.258651 systemd[1]: run-netns-cni\x2dd910f57c\x2d26c0\x2dd507\x2d47d8\x2dcadcb67cc1b9.mount: Deactivated successfully. Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259099021Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259234422Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259299747Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259859269Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259882636Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\"" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259973443Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.259987870Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:05.261170 containerd[1479]: time="2025-03-19T11:44:05.260073137Z" level=info msg="Ensure that sandbox 1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880 in task-service has been cleanup successfully" Mar 19 11:44:05.263977 containerd[1479]: time="2025-03-19T11:44:05.263935759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:4,}" Mar 19 11:44:05.264591 containerd[1479]: time="2025-03-19T11:44:05.264441668Z" level=info msg="TearDown network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" successfully" Mar 19 11:44:05.265627 containerd[1479]: time="2025-03-19T11:44:05.265590118Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" returns successfully" Mar 19 11:44:05.266033 systemd[1]: run-netns-cni\x2d29ab1e3c\x2dbd3c\x2d6b3f\x2d0294\x2da49c792c0775.mount: Deactivated successfully. 
Mar 19 11:44:05.266483 containerd[1479]: time="2025-03-19T11:44:05.266375118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:1,}" Mar 19 11:44:05.447156 containerd[1479]: time="2025-03-19T11:44:05.446059399Z" level=error msg="Failed to destroy network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.449347 containerd[1479]: time="2025-03-19T11:44:05.449167940Z" level=error msg="encountered an error cleaning up failed sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.450252 containerd[1479]: time="2025-03-19T11:44:05.449991438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.451358 kubelet[1808]: E0319 11:44:05.450761 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.451358 kubelet[1808]: E0319 11:44:05.450833 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:05.451358 kubelet[1808]: E0319 11:44:05.450860 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:05.451658 kubelet[1808]: E0319 11:44:05.450918 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:05.465188 containerd[1479]: time="2025-03-19T11:44:05.465128821Z" level=error msg="Failed to destroy network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.466126 containerd[1479]: time="2025-03-19T11:44:05.466076460Z" level=error msg="encountered an error cleaning up failed sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.467783 containerd[1479]: time="2025-03-19T11:44:05.467475233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.468450 kubelet[1808]: E0319 11:44:05.468229 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:05.468450 kubelet[1808]: E0319 11:44:05.468410 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:05.468450 kubelet[1808]: E0319 11:44:05.468440 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:05.469040 kubelet[1808]: E0319 11:44:05.468704 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-q2mnk" podUID="ac7589c1-2b13-4e45-8172-5c4933a8cd33" Mar 19 11:44:06.059678 kubelet[1808]: E0319 11:44:06.059623 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:06.245289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494-shm.mount: Deactivated successfully. Mar 19 11:44:06.245421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f-shm.mount: Deactivated successfully. Mar 19 11:44:06.262307 kubelet[1808]: I0319 11:44:06.261549 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f" Mar 19 11:44:06.262463 containerd[1479]: time="2025-03-19T11:44:06.262388157Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\"" Mar 19 11:44:06.265551 containerd[1479]: time="2025-03-19T11:44:06.263205239Z" level=info msg="Ensure that sandbox 25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f in task-service has been cleanup successfully" Mar 19 11:44:06.265444 systemd[1]: run-netns-cni\x2dc8787f98\x2d4a4f\x2de374\x2dfb49\x2d48c5449cc319.mount: Deactivated successfully. Mar 19 11:44:06.269760 containerd[1479]: time="2025-03-19T11:44:06.269581432Z" level=info msg="TearDown network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" successfully" Mar 19 11:44:06.269760 containerd[1479]: time="2025-03-19T11:44:06.269622211Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" returns successfully" Mar 19 11:44:06.270390 containerd[1479]: time="2025-03-19T11:44:06.270347473Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\"" Mar 19 11:44:06.270538 containerd[1479]: time="2025-03-19T11:44:06.270474300Z" level=info msg="TearDown network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" successfully" Mar 19 11:44:06.270589 containerd[1479]: time="2025-03-19T11:44:06.270534646Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" returns successfully" Mar 19 11:44:06.271027 kubelet[1808]: I0319 11:44:06.270740 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494" Mar 19 11:44:06.271126 containerd[1479]: time="2025-03-19T11:44:06.270950138Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" Mar 19 11:44:06.271126 containerd[1479]: time="2025-03-19T11:44:06.271054995Z" level=info msg="TearDown network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" successfully" Mar 19 11:44:06.271126 containerd[1479]: time="2025-03-19T11:44:06.271069441Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" returns 
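Every CreatePodSandbox failure in this stretch has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the calico/node container writes once it is running with /var/lib/calico mounted from the host. A minimal Go sketch of that readiness check, assuming only the path quoted in the error message (the code is illustrative, not Calico's implementation):

    // nodename_check.go: reproduce the check implied by the repeated
    // "stat /var/lib/calico/nodename: no such file or directory" errors above.
    // The path comes from the log; everything else is an illustrative sketch.
    package main

    import (
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node once it is up

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // The state the kubelet keeps hitting: calico/node has not started
            // (or /var/lib/calico is not mounted), so every CNI ADD fails.
            fmt.Fprintln(os.Stderr, "calico/node is not ready on this host")
            os.Exit(1)
        } else if err != nil {
            fmt.Fprintln(os.Stderr, "stat failed:", err)
            os.Exit(1)
        }
        fmt.Printf("CNI would register this workload under node name %q\n", string(data))
    }

In this trace the errors stop once the calico-node container is started at 11:44:07; the RunPodSandbox attempts at 11:44:08 then get as far as IPAM and succeed.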
successfully" Mar 19 11:44:06.271535 containerd[1479]: time="2025-03-19T11:44:06.271508662Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:06.271903 containerd[1479]: time="2025-03-19T11:44:06.271603985Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:06.271903 containerd[1479]: time="2025-03-19T11:44:06.271619223Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:06.271903 containerd[1479]: time="2025-03-19T11:44:06.271796513Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\"" Mar 19 11:44:06.272715 containerd[1479]: time="2025-03-19T11:44:06.272662257Z" level=info msg="Ensure that sandbox 7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494 in task-service has been cleanup successfully" Mar 19 11:44:06.275316 containerd[1479]: time="2025-03-19T11:44:06.275061320Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:06.275316 containerd[1479]: time="2025-03-19T11:44:06.275191271Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:06.275316 containerd[1479]: time="2025-03-19T11:44:06.275205748Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:06.276314 systemd[1]: run-netns-cni\x2d86c8bb3c\x2d323e\x2d2d72\x2dfae6\x2d08ca0d6a8d8f.mount: Deactivated successfully. Mar 19 11:44:06.277700 containerd[1479]: time="2025-03-19T11:44:06.277164242Z" level=info msg="TearDown network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" successfully" Mar 19 11:44:06.277700 containerd[1479]: time="2025-03-19T11:44:06.277204206Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" returns successfully" Mar 19 11:44:06.277700 containerd[1479]: time="2025-03-19T11:44:06.277523955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:5,}" Mar 19 11:44:06.280006 containerd[1479]: time="2025-03-19T11:44:06.279873515Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\"" Mar 19 11:44:06.280006 containerd[1479]: time="2025-03-19T11:44:06.280005052Z" level=info msg="TearDown network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" successfully" Mar 19 11:44:06.280208 containerd[1479]: time="2025-03-19T11:44:06.280022573Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" returns successfully" Mar 19 11:44:06.281773 containerd[1479]: time="2025-03-19T11:44:06.281572677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:2,}" Mar 19 11:44:06.478633 containerd[1479]: time="2025-03-19T11:44:06.478108194Z" level=error msg="Failed to destroy network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.480130 containerd[1479]: time="2025-03-19T11:44:06.479933492Z" level=error msg="encountered an error cleaning up failed sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.480130 containerd[1479]: time="2025-03-19T11:44:06.480013049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.480605 kubelet[1808]: E0319 11:44:06.480519 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.481183 kubelet[1808]: E0319 11:44:06.480626 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:06.481183 kubelet[1808]: E0319 11:44:06.480660 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:06.481183 kubelet[1808]: E0319 11:44:06.480744 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:06.505898 containerd[1479]: time="2025-03-19T11:44:06.505725197Z" level=error msg="Failed to destroy network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.506331 containerd[1479]: time="2025-03-19T11:44:06.506165454Z" level=error msg="encountered an error cleaning up failed sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.506331 containerd[1479]: time="2025-03-19T11:44:06.506254020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.506853 kubelet[1808]: E0319 11:44:06.506763 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:06.506853 kubelet[1808]: E0319 11:44:06.506836 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:06.506853 kubelet[1808]: E0319 11:44:06.506856 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:06.507557 kubelet[1808]: E0319 11:44:06.506908 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-q2mnk" podUID="ac7589c1-2b13-4e45-8172-5c4933a8cd33" Mar 19 11:44:07.061545 kubelet[1808]: E0319 11:44:07.061471 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
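The retry pattern around these errors is also visible in the log: each failed CreatePodSandbox leaves a sandbox ID behind, and on the next sync the kubelet and containerd first issue StopPodSandbox/TearDown for every accumulated ID before calling RunPodSandbox again with an incremented Attempt counter (1 through 7 for csi-node-driver-tkgw6 here). A schematic Go sketch of that loop, with types and names that are illustrative rather than kubelet internals:

    // sandbox_retry.go: schematic of the teardown-then-retry pattern above.
    // Names and structure are illustrative, not kubelet source.
    package main

    import "fmt"

    type podRetry struct {
        attempt int
        failed  []string // sandbox IDs whose network setup failed earlier
    }

    func (p *podRetry) sync(newID string) {
        for _, id := range p.failed {
            fmt.Printf("StopPodSandbox %s / TearDown network\n", id)
        }
        p.attempt++
        fmt.Printf("RunPodSandbox Attempt:%d -> sandbox %s\n", p.attempt, newID)
        // While calico/node is down the CNI ADD fails again, so the new sandbox
        // joins the cleanup list for the following sync.
        p.failed = append(p.failed, newID)
    }

    func main() {
        p := &podRetry{}
        for _, id := range []string{"9bf173c3", "1e15de7c", "25dbbbcd", "7c85e7ca"} {
            p.sync(id) // shortened sandbox ID prefixes taken from the log
        }
    }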
Mar 19 11:44:07.249222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b-shm.mount: Deactivated successfully. Mar 19 11:44:07.288014 kubelet[1808]: I0319 11:44:07.287111 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b" Mar 19 11:44:07.289333 containerd[1479]: time="2025-03-19T11:44:07.289281186Z" level=info msg="StopPodSandbox for \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\"" Mar 19 11:44:07.291095 containerd[1479]: time="2025-03-19T11:44:07.290609866Z" level=info msg="Ensure that sandbox b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b in task-service has been cleanup successfully" Mar 19 11:44:07.293650 containerd[1479]: time="2025-03-19T11:44:07.291308132Z" level=info msg="TearDown network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\" successfully" Mar 19 11:44:07.293650 containerd[1479]: time="2025-03-19T11:44:07.291341965Z" level=info msg="StopPodSandbox for \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\" returns successfully" Mar 19 11:44:07.294815 systemd[1]: run-netns-cni\x2d65d3cbec\x2d3c3d\x2d772c\x2d1e5d\x2d5db889cbf93f.mount: Deactivated successfully. Mar 19 11:44:07.297529 containerd[1479]: time="2025-03-19T11:44:07.296416925Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\"" Mar 19 11:44:07.297529 containerd[1479]: time="2025-03-19T11:44:07.296537913Z" level=info msg="TearDown network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" successfully" Mar 19 11:44:07.297529 containerd[1479]: time="2025-03-19T11:44:07.296629032Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" returns successfully" Mar 19 11:44:07.300240 containerd[1479]: time="2025-03-19T11:44:07.299950781Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\"" Mar 19 11:44:07.300240 containerd[1479]: time="2025-03-19T11:44:07.300084670Z" level=info msg="TearDown network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" successfully" Mar 19 11:44:07.300240 containerd[1479]: time="2025-03-19T11:44:07.300102303Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" returns successfully" Mar 19 11:44:07.304602 containerd[1479]: time="2025-03-19T11:44:07.302959772Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" Mar 19 11:44:07.304602 containerd[1479]: time="2025-03-19T11:44:07.303079328Z" level=info msg="TearDown network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" successfully" Mar 19 11:44:07.304602 containerd[1479]: time="2025-03-19T11:44:07.303094676Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" returns successfully" Mar 19 11:44:07.304837 kubelet[1808]: I0319 11:44:07.303763 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a" Mar 19 11:44:07.306124 containerd[1479]: time="2025-03-19T11:44:07.306069529Z" level=info msg="StopPodSandbox for \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\"" Mar 19 
11:44:07.306381 containerd[1479]: time="2025-03-19T11:44:07.306354126Z" level=info msg="Ensure that sandbox 22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a in task-service has been cleanup successfully" Mar 19 11:44:07.309458 containerd[1479]: time="2025-03-19T11:44:07.309393693Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:07.310478 systemd[1]: run-netns-cni\x2dd8ccbdb5\x2d6b75\x2de803\x2dda94\x2d02f712e7c496.mount: Deactivated successfully. Mar 19 11:44:07.312792 containerd[1479]: time="2025-03-19T11:44:07.311641315Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:07.312792 containerd[1479]: time="2025-03-19T11:44:07.311678556Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:07.314310 containerd[1479]: time="2025-03-19T11:44:07.313880626Z" level=info msg="TearDown network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\" successfully" Mar 19 11:44:07.314310 containerd[1479]: time="2025-03-19T11:44:07.313925548Z" level=info msg="StopPodSandbox for \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\" returns successfully" Mar 19 11:44:07.316263 containerd[1479]: time="2025-03-19T11:44:07.316163093Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:07.316401 containerd[1479]: time="2025-03-19T11:44:07.316322008Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:07.316401 containerd[1479]: time="2025-03-19T11:44:07.316342633Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:07.316482 containerd[1479]: time="2025-03-19T11:44:07.316455880Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\"" Mar 19 11:44:07.316631 containerd[1479]: time="2025-03-19T11:44:07.316582848Z" level=info msg="TearDown network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" successfully" Mar 19 11:44:07.316631 containerd[1479]: time="2025-03-19T11:44:07.316604363Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" returns successfully" Mar 19 11:44:07.318515 containerd[1479]: time="2025-03-19T11:44:07.318304192Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\"" Mar 19 11:44:07.318515 containerd[1479]: time="2025-03-19T11:44:07.318330317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:6,}" Mar 19 11:44:07.318515 containerd[1479]: time="2025-03-19T11:44:07.318416437Z" level=info msg="TearDown network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" successfully" Mar 19 11:44:07.318515 containerd[1479]: time="2025-03-19T11:44:07.318428855Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" returns successfully" Mar 19 11:44:07.319776 containerd[1479]: time="2025-03-19T11:44:07.319583059Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:3,}" Mar 19 11:44:07.540717 containerd[1479]: time="2025-03-19T11:44:07.540470042Z" level=error msg="Failed to destroy network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.542302 containerd[1479]: time="2025-03-19T11:44:07.542059280Z" level=error msg="encountered an error cleaning up failed sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.542302 containerd[1479]: time="2025-03-19T11:44:07.542168476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.542821 kubelet[1808]: E0319 11:44:07.542427 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.542821 kubelet[1808]: E0319 11:44:07.542506 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:07.542821 kubelet[1808]: E0319 11:44:07.542535 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-q2mnk" Mar 19 11:44:07.543240 kubelet[1808]: E0319 11:44:07.542585 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-q2mnk_default(ac7589c1-2b13-4e45-8172-5c4933a8cd33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-q2mnk" podUID="ac7589c1-2b13-4e45-8172-5c4933a8cd33" Mar 19 11:44:07.545751 containerd[1479]: time="2025-03-19T11:44:07.545701126Z" level=error msg="Failed to destroy network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.546551 containerd[1479]: time="2025-03-19T11:44:07.546194371Z" level=error msg="encountered an error cleaning up failed sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.546551 containerd[1479]: time="2025-03-19T11:44:07.546366116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.546807 kubelet[1808]: E0319 11:44:07.546629 1808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:44:07.546807 kubelet[1808]: E0319 11:44:07.546689 1808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:07.546807 kubelet[1808]: E0319 11:44:07.546710 1808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkgw6" Mar 19 11:44:07.547256 kubelet[1808]: E0319 11:44:07.546753 1808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkgw6_calico-system(28cc52f2-b4a5-4277-864b-64eb1318d7af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkgw6" podUID="28cc52f2-b4a5-4277-864b-64eb1318d7af" Mar 19 11:44:07.646370 containerd[1479]: time="2025-03-19T11:44:07.646213861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:07.647959 containerd[1479]: time="2025-03-19T11:44:07.647613599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 19 11:44:07.647959 containerd[1479]: time="2025-03-19T11:44:07.647813709Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:07.650225 containerd[1479]: time="2025-03-19T11:44:07.650168649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:07.651052 containerd[1479]: time="2025-03-19T11:44:07.651004047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.424565761s" Mar 19 11:44:07.651052 containerd[1479]: time="2025-03-19T11:44:07.651051325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 19 11:44:07.677916 containerd[1479]: time="2025-03-19T11:44:07.677869285Z" level=info msg="CreateContainer within sandbox \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 19 11:44:07.688284 containerd[1479]: time="2025-03-19T11:44:07.688220796Z" level=info msg="CreateContainer within sandbox \"a25745fcc1f81d7eb54ce3bab3ed976eded201e3dbc5c0c3a52b2fe3beb1ad63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101\"" Mar 19 11:44:07.689152 containerd[1479]: time="2025-03-19T11:44:07.689116057Z" level=info msg="StartContainer for \"c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101\"" Mar 19 11:44:07.793806 systemd[1]: Started cri-containerd-c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101.scope - libcontainer container c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101. Mar 19 11:44:07.845693 containerd[1479]: time="2025-03-19T11:44:07.845624502Z" level=info msg="StartContainer for \"c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101\" returns successfully" Mar 19 11:44:07.938909 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 19 11:44:07.939093 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
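For scale, the pull statistics logged above for ghcr.io/flatcar/calico/node:v3.29.2 (142241445 bytes read in 6.424565761s) work out to roughly 22 MB/s; a small sketch of the arithmetic, purely illustrative:

    // pull_rate.go: back-of-the-envelope rate for the calico/node image pull
    // reported above; both figures are taken from the log lines.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 142241445.0
        elapsed := 6424565761 * time.Nanosecond
        fmt.Printf("~%.1f MB/s effective pull rate\n", bytesRead/1e6/elapsed.Seconds())
    }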
Mar 19 11:44:08.062756 kubelet[1808]: E0319 11:44:08.062692 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:08.247916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871-shm.mount: Deactivated successfully. Mar 19 11:44:08.248043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd-shm.mount: Deactivated successfully. Mar 19 11:44:08.248130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184006480.mount: Deactivated successfully. Mar 19 11:44:08.310699 kubelet[1808]: E0319 11:44:08.309998 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:44:08.317541 kubelet[1808]: I0319 11:44:08.316299 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871" Mar 19 11:44:08.317687 containerd[1479]: time="2025-03-19T11:44:08.316986180Z" level=info msg="StopPodSandbox for \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\"" Mar 19 11:44:08.317687 containerd[1479]: time="2025-03-19T11:44:08.317216267Z" level=info msg="Ensure that sandbox 066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871 in task-service has been cleanup successfully" Mar 19 11:44:08.317687 containerd[1479]: time="2025-03-19T11:44:08.317409313Z" level=info msg="TearDown network for sandbox \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\" successfully" Mar 19 11:44:08.317687 containerd[1479]: time="2025-03-19T11:44:08.317423408Z" level=info msg="StopPodSandbox for \"066cc552543197aa076b75cb82693a2436c0fbc8380e386684e33f8cdb8f5871\" returns successfully" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.318249619Z" level=info msg="StopPodSandbox for \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\"" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.318342544Z" level=info msg="TearDown network for sandbox \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\" successfully" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.318353703Z" level=info msg="StopPodSandbox for \"b4760979dae1813fbe51bca467607d092abf17e2aa2b867fa38a4245081f771b\" returns successfully" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.318916198Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\"" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.319003086Z" level=info msg="TearDown network for sandbox \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" successfully" Mar 19 11:44:08.320240 containerd[1479]: time="2025-03-19T11:44:08.319013839Z" level=info msg="StopPodSandbox for \"25dbbbcd58c09b05275e55efd3b9f8340ed93b7a78f177719839fd1031dc9d3f\" returns successfully" Mar 19 11:44:08.321298 systemd[1]: run-netns-cni\x2d0eca13d8\x2d5e07\x2d12d2\x2dfe07\x2d0917af6b3137.mount: Deactivated successfully. 
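The "Nameserver limits exceeded" warning above means the kubelet had more resolvers than it will pass into a pod's resolv.conf (three in upstream Kubernetes) and dropped the rest; note that the applied line even contains 67.207.67.2 twice. A simplified Go illustration of the cap, with the limit and truncation assumed rather than copied from kubelet source:

    // dns_cap.go: what the kubelet's nameserver warning amounts to.
    // The cap of three is the upstream default; the logic is a simplification.
    package main

    import "fmt"

    const maxNameservers = 3

    func capNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // Applied line from the log (with its duplicate), plus a hypothetical
        // fourth resolver that would be dropped.
        resolvers := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "192.0.2.1"}
        fmt.Println(capNameservers(resolvers))
    }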
Mar 19 11:44:08.323542 containerd[1479]: time="2025-03-19T11:44:08.322199628Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\"" Mar 19 11:44:08.323542 containerd[1479]: time="2025-03-19T11:44:08.322325949Z" level=info msg="TearDown network for sandbox \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" successfully" Mar 19 11:44:08.323542 containerd[1479]: time="2025-03-19T11:44:08.322338177Z" level=info msg="StopPodSandbox for \"9bf173c36dc42256bf6eecbaa99a2026f759093beaba6ef33d6958a4802cb58c\" returns successfully" Mar 19 11:44:08.325375 containerd[1479]: time="2025-03-19T11:44:08.325336441Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\"" Mar 19 11:44:08.325554 containerd[1479]: time="2025-03-19T11:44:08.325448628Z" level=info msg="TearDown network for sandbox \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" successfully" Mar 19 11:44:08.325554 containerd[1479]: time="2025-03-19T11:44:08.325459497Z" level=info msg="StopPodSandbox for \"f9a206a99b9d34bdbd85fe08f76979432a046fe4f7ce7925b3d365ce37729d91\" returns successfully" Mar 19 11:44:08.325989 kubelet[1808]: I0319 11:44:08.325950 1808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd" Mar 19 11:44:08.327104 containerd[1479]: time="2025-03-19T11:44:08.327062681Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\"" Mar 19 11:44:08.327210 containerd[1479]: time="2025-03-19T11:44:08.327180144Z" level=info msg="TearDown network for sandbox \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" successfully" Mar 19 11:44:08.327210 containerd[1479]: time="2025-03-19T11:44:08.327192194Z" level=info msg="StopPodSandbox for \"b6287bbdc100118aac7e1ffc4c3ed5e110dd4d32935b9f902bcdb99cec993eb7\" returns successfully" Mar 19 11:44:08.327843 containerd[1479]: time="2025-03-19T11:44:08.327818624Z" level=info msg="StopPodSandbox for \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\"" Mar 19 11:44:08.328140 containerd[1479]: time="2025-03-19T11:44:08.327939105Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\"" Mar 19 11:44:08.328140 containerd[1479]: time="2025-03-19T11:44:08.328034001Z" level=info msg="TearDown network for sandbox \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" successfully" Mar 19 11:44:08.328140 containerd[1479]: time="2025-03-19T11:44:08.328044257Z" level=info msg="StopPodSandbox for \"10bf8a6188e21f369a89af0aa31a2a0fe7ad21a1a4bffec6f3b82c90f6de8384\" returns successfully" Mar 19 11:44:08.328140 containerd[1479]: time="2025-03-19T11:44:08.328073512Z" level=info msg="Ensure that sandbox 07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd in task-service has been cleanup successfully" Mar 19 11:44:08.328765 containerd[1479]: time="2025-03-19T11:44:08.328737171Z" level=info msg="TearDown network for sandbox \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\" successfully" Mar 19 11:44:08.328765 containerd[1479]: time="2025-03-19T11:44:08.328762983Z" level=info msg="StopPodSandbox for \"07fa1d1f833bd0839e8e66cab398055ac3327733151eb9a6c6db82084f1833bd\" returns successfully" Mar 19 11:44:08.329256 containerd[1479]: time="2025-03-19T11:44:08.328928490Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:7,}" Mar 19 11:44:08.331437 systemd[1]: run-netns-cni\x2d4f406d2c\x2d116c\x2de0d5\x2ddb6e\x2d5420fde73644.mount: Deactivated successfully. Mar 19 11:44:08.334233 containerd[1479]: time="2025-03-19T11:44:08.332467456Z" level=info msg="StopPodSandbox for \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\"" Mar 19 11:44:08.334377 containerd[1479]: time="2025-03-19T11:44:08.334352538Z" level=info msg="TearDown network for sandbox \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\" successfully" Mar 19 11:44:08.334427 containerd[1479]: time="2025-03-19T11:44:08.334376457Z" level=info msg="StopPodSandbox for \"22082988529ac4c73e91ebf0cbe337363b06544b378aff616d48e9e06a66d66a\" returns successfully" Mar 19 11:44:08.335974 containerd[1479]: time="2025-03-19T11:44:08.335928012Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\"" Mar 19 11:44:08.336136 containerd[1479]: time="2025-03-19T11:44:08.336042729Z" level=info msg="TearDown network for sandbox \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" successfully" Mar 19 11:44:08.336136 containerd[1479]: time="2025-03-19T11:44:08.336062616Z" level=info msg="StopPodSandbox for \"7c85e7ca31dd7214185aad2714194264695b2f727cb149eea2e81a13027e7494\" returns successfully" Mar 19 11:44:08.336866 containerd[1479]: time="2025-03-19T11:44:08.336688125Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\"" Mar 19 11:44:08.336866 containerd[1479]: time="2025-03-19T11:44:08.336791656Z" level=info msg="TearDown network for sandbox \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" successfully" Mar 19 11:44:08.336866 containerd[1479]: time="2025-03-19T11:44:08.336804283Z" level=info msg="StopPodSandbox for \"1e15de7c8355b4b19de519758110b134f6cb02e5520d086bb1e6c783308b2880\" returns successfully" Mar 19 11:44:08.337773 containerd[1479]: time="2025-03-19T11:44:08.337737954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:4,}" Mar 19 11:44:08.364386 kubelet[1808]: I0319 11:44:08.364038 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xqqjg" podStartSLOduration=4.096251946 podStartE2EDuration="19.364016657s" podCreationTimestamp="2025-03-19 11:43:49 +0000 UTC" firstStartedPulling="2025-03-19 11:43:52.385280319 +0000 UTC m=+4.124965176" lastFinishedPulling="2025-03-19 11:44:07.653045024 +0000 UTC m=+19.392729887" observedRunningTime="2025-03-19 11:44:08.342832546 +0000 UTC m=+20.082517409" watchObservedRunningTime="2025-03-19 11:44:08.364016657 +0000 UTC m=+20.103701509" Mar 19 11:44:08.627710 systemd-networkd[1393]: calia9a756595af: Link UP Mar 19 11:44:08.627991 systemd-networkd[1393]: calia9a756595af: Gained carrier Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.431 [INFO][2725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.477 [INFO][2725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.145.41-k8s-csi--node--driver--tkgw6-eth0 csi-node-driver- calico-system 28cc52f2-b4a5-4277-864b-64eb1318d7af 1074 0 2025-03-19 11:43:49 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 146.190.145.41 csi-node-driver-tkgw6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9a756595af [] []}} ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.478 [INFO][2725] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.533 [INFO][2774] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" HandleID="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Workload="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.553 [INFO][2774] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" HandleID="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Workload="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290c40), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.145.41", "pod":"csi-node-driver-tkgw6", "timestamp":"2025-03-19 11:44:08.533919521 +0000 UTC"}, Hostname:"146.190.145.41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.553 [INFO][2774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.553 [INFO][2774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.553 [INFO][2774] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.145.41' Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.558 [INFO][2774] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.565 [INFO][2774] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.575 [INFO][2774] ipam/ipam.go 489: Trying affinity for 192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.581 [INFO][2774] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.585 [INFO][2774] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.585 [INFO][2774] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.192/26 handle="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.589 [INFO][2774] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1 Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.601 [INFO][2774] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.192/26 handle="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2774] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.193/26] block=192.168.72.192/26 handle="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2774] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.193/26] handle="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" host="146.190.145.41" Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
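The IPAM walk above ends with host 146.190.145.41 confirming its affinity for the block 192.168.72.192/26 and handing out 192.168.72.193, the first address after the block's own network address. The address arithmetic as a small sketch (real Calico IPAM also tracks handles and reservations, which this ignores):

    // ipam_block.go: the address math behind the IPAM lines above. Assumes only
    // that the block's network address itself is skipped.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.72.192/26")
        first := block.Addr().Next() // 192.168.72.193
        fmt.Printf("block %s spans %d addresses; first assignment in this trace was %s\n",
            block, 1<<(32-block.Bits()), first)
    }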
Mar 19 11:44:08.645637 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2774] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.193/26] IPv6=[] ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" HandleID="k8s-pod-network.1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Workload="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.617 [INFO][2725] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-csi--node--driver--tkgw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28cc52f2-b4a5-4277-864b-64eb1318d7af", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 43, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"", Pod:"csi-node-driver-tkgw6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a756595af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.617 [INFO][2725] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.193/32] ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.617 [INFO][2725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9a756595af ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.628 [INFO][2725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.629 [INFO][2725] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" 
WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-csi--node--driver--tkgw6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28cc52f2-b4a5-4277-864b-64eb1318d7af", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 43, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1", Pod:"csi-node-driver-tkgw6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a756595af", MAC:"4a:18:bb:ac:6c:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:08.646423 containerd[1479]: 2025-03-19 11:44:08.643 [INFO][2725] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1" Namespace="calico-system" Pod="csi-node-driver-tkgw6" WorkloadEndpoint="146.190.145.41-k8s-csi--node--driver--tkgw6-eth0" Mar 19 11:44:08.676723 containerd[1479]: time="2025-03-19T11:44:08.676388824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:08.676723 containerd[1479]: time="2025-03-19T11:44:08.676470974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:08.677213 containerd[1479]: time="2025-03-19T11:44:08.676883482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:08.678223 containerd[1479]: time="2025-03-19T11:44:08.678033659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:08.710811 systemd[1]: Started cri-containerd-1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1.scope - libcontainer container 1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1. 
Mar 19 11:44:08.743068 containerd[1479]: time="2025-03-19T11:44:08.743018584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkgw6,Uid:28cc52f2-b4a5-4277-864b-64eb1318d7af,Namespace:calico-system,Attempt:7,} returns sandbox id \"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1\"" Mar 19 11:44:08.746057 containerd[1479]: time="2025-03-19T11:44:08.745849585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 19 11:44:08.801350 systemd-networkd[1393]: cali147913740c8: Link UP Mar 19 11:44:08.801593 systemd-networkd[1393]: cali147913740c8: Gained carrier Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.457 [INFO][2744] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.483 [INFO][2744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0 nginx-deployment-7fcdb87857- default ac7589c1-2b13-4e45-8172-5c4933a8cd33 1209 0 2025-03-19 11:44:02 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.145.41 nginx-deployment-7fcdb87857-q2mnk eth0 default [] [] [kns.default ksa.default.default] cali147913740c8 [] []}} ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.483 [INFO][2744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.537 [INFO][2779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" HandleID="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Workload="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.555 [INFO][2779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" HandleID="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Workload="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031af70), Attrs:map[string]string{"namespace":"default", "node":"146.190.145.41", "pod":"nginx-deployment-7fcdb87857-q2mnk", "timestamp":"2025-03-19 11:44:08.537193808 +0000 UTC"}, Hostname:"146.190.145.41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.555 [INFO][2779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
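Note the timing of the second CNI request ([2779], for the nginx pod): it logs "About to acquire host-wide IPAM lock" at 08.555 but only acquires it at 08.613, the same instant the first request ([2774]) releases it. The sketch below is not Calico's code, only the serialization pattern those timestamps suggest, where a host-wide lock forces concurrent CNI ADDs to assign addresses one at a time:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex // stands in for the host-wide IPAM lock seen in the log
	var wg sync.WaitGroup

	assign := func(req string, work time.Duration) {
		defer wg.Done()
		fmt.Println(req, "about to acquire host-wide IPAM lock")
		ipamLock.Lock()
		fmt.Println(req, "acquired host-wide IPAM lock")
		time.Sleep(work) // stand-in for the affinity lookup and block claim
		ipamLock.Unlock()
		fmt.Println(req, "released host-wide IPAM lock")
	}

	wg.Add(2)
	go assign("[2774]", 60*time.Millisecond)
	go assign("[2779]", 40*time.Millisecond)
	wg.Wait()
}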
Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.613 [INFO][2779] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.145.41' Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.662 [INFO][2779] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.674 [INFO][2779] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.683 [INFO][2779] ipam/ipam.go 489: Trying affinity for 192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.687 [INFO][2779] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.694 [INFO][2779] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.694 [INFO][2779] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.192/26 handle="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.700 [INFO][2779] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.775 [INFO][2779] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.192/26 handle="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.794 [INFO][2779] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.194/26] block=192.168.72.192/26 handle="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.794 [INFO][2779] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.194/26] handle="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" host="146.190.145.41" Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.794 [INFO][2779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:44:08.820751 containerd[1479]: 2025-03-19 11:44:08.795 [INFO][2779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.194/26] IPv6=[] ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" HandleID="k8s-pod-network.8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Workload="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.796 [INFO][2744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"ac7589c1-2b13-4e45-8172-5c4933a8cd33", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-q2mnk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147913740c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.797 [INFO][2744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.194/32] ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.797 [INFO][2744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali147913740c8 ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.802 [INFO][2744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.802 [INFO][2744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" 
WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"ac7589c1-2b13-4e45-8172-5c4933a8cd33", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c", Pod:"nginx-deployment-7fcdb87857-q2mnk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147913740c8", MAC:"e2:09:83:63:e6:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:08.821980 containerd[1479]: 2025-03-19 11:44:08.818 [INFO][2744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c" Namespace="default" Pod="nginx-deployment-7fcdb87857-q2mnk" WorkloadEndpoint="146.190.145.41-k8s-nginx--deployment--7fcdb87857--q2mnk-eth0" Mar 19 11:44:08.848964 containerd[1479]: time="2025-03-19T11:44:08.848708301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:08.848964 containerd[1479]: time="2025-03-19T11:44:08.848781615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:08.848964 containerd[1479]: time="2025-03-19T11:44:08.848800929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:08.848964 containerd[1479]: time="2025-03-19T11:44:08.848902771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:08.874825 systemd[1]: Started cri-containerd-8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c.scope - libcontainer container 8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c. 
Mar 19 11:44:08.926890 containerd[1479]: time="2025-03-19T11:44:08.926841857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-q2mnk,Uid:ac7589c1-2b13-4e45-8172-5c4933a8cd33,Namespace:default,Attempt:4,} returns sandbox id \"8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c\"" Mar 19 11:44:09.039915 kubelet[1808]: E0319 11:44:09.039836 1808 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:09.063605 kubelet[1808]: E0319 11:44:09.063526 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:09.335691 kubelet[1808]: E0319 11:44:09.335646 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:44:09.827527 kernel: bpftool[3032]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 19 11:44:10.064446 kubelet[1808]: E0319 11:44:10.064374 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:10.083592 containerd[1479]: time="2025-03-19T11:44:10.081415828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:10.083592 containerd[1479]: time="2025-03-19T11:44:10.082437705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 19 11:44:10.083592 containerd[1479]: time="2025-03-19T11:44:10.083118016Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:10.086515 containerd[1479]: time="2025-03-19T11:44:10.085460028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:10.086515 containerd[1479]: time="2025-03-19T11:44:10.086166999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.34025945s" Mar 19 11:44:10.086515 containerd[1479]: time="2025-03-19T11:44:10.086197014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 19 11:44:10.090556 containerd[1479]: time="2025-03-19T11:44:10.088599383Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 19 11:44:10.090556 containerd[1479]: time="2025-03-19T11:44:10.089068650Z" level=info msg="CreateContainer within sandbox \"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 19 11:44:10.115305 containerd[1479]: time="2025-03-19T11:44:10.115246652Z" level=info msg="CreateContainer within sandbox \"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b\"" Mar 19 11:44:10.117626 containerd[1479]: time="2025-03-19T11:44:10.116131707Z" level=info msg="StartContainer for \"fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b\"" Mar 19 11:44:10.134711 systemd-networkd[1393]: cali147913740c8: Gained IPv6LL Mar 19 11:44:10.179793 systemd[1]: Started cri-containerd-fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b.scope - libcontainer container fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b. Mar 19 11:44:10.219887 systemd-networkd[1393]: vxlan.calico: Link UP Mar 19 11:44:10.219898 systemd-networkd[1393]: vxlan.calico: Gained carrier Mar 19 11:44:10.242256 containerd[1479]: time="2025-03-19T11:44:10.241483171Z" level=info msg="StartContainer for \"fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b\" returns successfully" Mar 19 11:44:10.247849 systemd[1]: run-containerd-runc-k8s.io-fec13e2383b2b36b9851f8760808b321280274d5698ab859e7946610ace8ab5b-runc.N8A92C.mount: Deactivated successfully. Mar 19 11:44:10.326744 systemd-networkd[1393]: calia9a756595af: Gained IPv6LL Mar 19 11:44:10.350115 kubelet[1808]: E0319 11:44:10.349974 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 19 11:44:10.386816 systemd[1]: run-containerd-runc-k8s.io-c2c4b960f47f95267f1bfce5c148915d850a1c216c7e55994a311745b9b2b101-runc.vYBhvB.mount: Deactivated successfully. Mar 19 11:44:11.064810 kubelet[1808]: E0319 11:44:11.064736 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:11.863088 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Mar 19 11:44:12.066247 kubelet[1808]: E0319 11:44:12.065895 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:12.841754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016183956.mount: Deactivated successfully. 
Mar 19 11:44:13.066299 kubelet[1808]: E0319 11:44:13.066162 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:14.067655 kubelet[1808]: E0319 11:44:14.067580 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:14.545322 containerd[1479]: time="2025-03-19T11:44:14.543748835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:14.546625 containerd[1479]: time="2025-03-19T11:44:14.546544753Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 19 11:44:14.548083 containerd[1479]: time="2025-03-19T11:44:14.547990468Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:14.555286 containerd[1479]: time="2025-03-19T11:44:14.555231879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:14.557281 containerd[1479]: time="2025-03-19T11:44:14.557036257Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 4.468391291s" Mar 19 11:44:14.557281 containerd[1479]: time="2025-03-19T11:44:14.557134912Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 19 11:44:14.560931 containerd[1479]: time="2025-03-19T11:44:14.560869012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 19 11:44:14.562260 containerd[1479]: time="2025-03-19T11:44:14.562176548Z" level=info msg="CreateContainer within sandbox \"8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 19 11:44:14.586215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110071412.mount: Deactivated successfully. Mar 19 11:44:14.588945 containerd[1479]: time="2025-03-19T11:44:14.588866182Z" level=info msg="CreateContainer within sandbox \"8dfd25dfefd8bc8bd53a1d4f716ef5b8cbdc5bdae0bdc46af044573ead782f4c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f\"" Mar 19 11:44:14.591531 containerd[1479]: time="2025-03-19T11:44:14.590335867Z" level=info msg="StartContainer for \"d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f\"" Mar 19 11:44:14.644031 systemd[1]: run-containerd-runc-k8s.io-d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f-runc.s7m9wp.mount: Deactivated successfully. Mar 19 11:44:14.651807 systemd[1]: Started cri-containerd-d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f.scope - libcontainer container d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f. 
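The pull entry above reports the nginx image as 73060009 bytes, fetched in 4.468391291s. Dividing one figure by the other gives a rough effective transfer rate; this is purely illustrative arithmetic on the logged numbers, not a containerd API call:

package main

import "fmt"

func main() {
	// Figures from the "Pulled image ghcr.io/flatcar/nginx:latest" entry above.
	const bytesPulled = 73060009.0 // reported image size in bytes
	const seconds = 4.468391291    // reported pull duration

	rate := bytesPulled / seconds / 1e6
	fmt.Printf("effective pull rate: about %.1f MB/s\n", rate) // roughly 16 MB/s
}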
Mar 19 11:44:14.692318 containerd[1479]: time="2025-03-19T11:44:14.691062312Z" level=info msg="StartContainer for \"d9a1e68cc78ae6d57b8c4a7f3614c4d5f58497de02be0eb860b850be8aa9db6f\" returns successfully" Mar 19 11:44:15.068040 kubelet[1808]: E0319 11:44:15.067965 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:16.069305 kubelet[1808]: E0319 11:44:16.069217 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:16.212388 containerd[1479]: time="2025-03-19T11:44:16.212309100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:16.213389 containerd[1479]: time="2025-03-19T11:44:16.212876131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 19 11:44:16.214437 containerd[1479]: time="2025-03-19T11:44:16.214362114Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:16.217240 containerd[1479]: time="2025-03-19T11:44:16.217156536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:16.219079 containerd[1479]: time="2025-03-19T11:44:16.218556432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.657625548s" Mar 19 11:44:16.219079 containerd[1479]: time="2025-03-19T11:44:16.218617089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 19 11:44:16.222399 containerd[1479]: time="2025-03-19T11:44:16.222336900Z" level=info msg="CreateContainer within sandbox \"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 19 11:44:16.243846 containerd[1479]: time="2025-03-19T11:44:16.243730722Z" level=info msg="CreateContainer within sandbox \"1035e0327bc29953ca45462b7033e07ddfc13097924b80533d565e42b917fee1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"50d11bc3f7d9e5f41c033c7c8ed667fa0edac090e4551bc81b3bbf4f1457e18d\"" Mar 19 11:44:16.245403 containerd[1479]: time="2025-03-19T11:44:16.245318570Z" level=info msg="StartContainer for \"50d11bc3f7d9e5f41c033c7c8ed667fa0edac090e4551bc81b3bbf4f1457e18d\"" Mar 19 11:44:16.307935 systemd[1]: Started cri-containerd-50d11bc3f7d9e5f41c033c7c8ed667fa0edac090e4551bc81b3bbf4f1457e18d.scope - libcontainer container 50d11bc3f7d9e5f41c033c7c8ed667fa0edac090e4551bc81b3bbf4f1457e18d. 
Mar 19 11:44:16.357331 containerd[1479]: time="2025-03-19T11:44:16.357124171Z" level=info msg="StartContainer for \"50d11bc3f7d9e5f41c033c7c8ed667fa0edac090e4551bc81b3bbf4f1457e18d\" returns successfully" Mar 19 11:44:16.417750 kubelet[1808]: I0319 11:44:16.417554 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-q2mnk" podStartSLOduration=8.78643459 podStartE2EDuration="14.417528724s" podCreationTimestamp="2025-03-19 11:44:02 +0000 UTC" firstStartedPulling="2025-03-19 11:44:08.928185236 +0000 UTC m=+20.667870086" lastFinishedPulling="2025-03-19 11:44:14.559279367 +0000 UTC m=+26.298964220" observedRunningTime="2025-03-19 11:44:15.402104892 +0000 UTC m=+27.141789763" watchObservedRunningTime="2025-03-19 11:44:16.417528724 +0000 UTC m=+28.157213590" Mar 19 11:44:16.418019 kubelet[1808]: I0319 11:44:16.417968 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tkgw6" podStartSLOduration=19.942732992 podStartE2EDuration="27.417956739s" podCreationTimestamp="2025-03-19 11:43:49 +0000 UTC" firstStartedPulling="2025-03-19 11:44:08.745007715 +0000 UTC m=+20.484692567" lastFinishedPulling="2025-03-19 11:44:16.220231447 +0000 UTC m=+27.959916314" observedRunningTime="2025-03-19 11:44:16.417778219 +0000 UTC m=+28.157463091" watchObservedRunningTime="2025-03-19 11:44:16.417956739 +0000 UTC m=+28.157641610" Mar 19 11:44:17.069927 kubelet[1808]: E0319 11:44:17.069837 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:17.166835 kubelet[1808]: I0319 11:44:17.166743 1808 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 19 11:44:17.166835 kubelet[1808]: I0319 11:44:17.166795 1808 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 19 11:44:18.070819 kubelet[1808]: E0319 11:44:18.070747 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:19.072019 kubelet[1808]: E0319 11:44:19.071953 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:20.072816 kubelet[1808]: E0319 11:44:20.072751 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:21.074004 kubelet[1808]: E0319 11:44:21.073920 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:21.424916 systemd[1]: Created slice kubepods-besteffort-pod2360fece_7249_466c_9692_7c0e79971479.slice - libcontainer container kubepods-besteffort-pod2360fece_7249_466c_9692_7c0e79971479.slice. 
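The pod_startup_latency_tracker entries above report, for the nginx pod, podStartE2EDuration=14.417528724s and podStartSLOduration=8.78643459s. Those figures are consistent with the SLO duration being the end-to-end time minus the image pull window (lastFinishedPulling minus firstStartedPulling); treating that reading as an assumption, the subtraction can be redone from the logged wall-clock timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet entry for
	// default/nginx-deployment-7fcdb87857-q2mnk above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-03-19 11:44:02 +0000 UTC")
	firstPull := parse("2025-03-19 11:44:08.928185236 +0000 UTC")
	lastPull := parse("2025-03-19 11:44:14.559279367 +0000 UTC")
	running := parse("2025-03-19 11:44:16.417528724 +0000 UTC")

	e2e := running.Sub(created)          // equals the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // agrees with podStartSLOduration to within a few ns
	fmt.Println("E2E:", e2e, " SLO (excluding image pulls):", slo)
}

The few-nanosecond residue comes from the monotonic (m=+) readings shown alongside the wall-clock times in the log entry.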
Mar 19 11:44:21.429599 kubelet[1808]: I0319 11:44:21.429536 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2360fece-7249-466c-9692-7c0e79971479-data\") pod \"nfs-server-provisioner-0\" (UID: \"2360fece-7249-466c-9692-7c0e79971479\") " pod="default/nfs-server-provisioner-0" Mar 19 11:44:21.429782 kubelet[1808]: I0319 11:44:21.429614 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgldz\" (UniqueName: \"kubernetes.io/projected/2360fece-7249-466c-9692-7c0e79971479-kube-api-access-wgldz\") pod \"nfs-server-provisioner-0\" (UID: \"2360fece-7249-466c-9692-7c0e79971479\") " pod="default/nfs-server-provisioner-0" Mar 19 11:44:21.729973 containerd[1479]: time="2025-03-19T11:44:21.729780786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2360fece-7249-466c-9692-7c0e79971479,Namespace:default,Attempt:0,}" Mar 19 11:44:21.962855 systemd-networkd[1393]: cali60e51b789ff: Link UP Mar 19 11:44:21.963049 systemd-networkd[1393]: cali60e51b789ff: Gained carrier Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.793 [INFO][3318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.145.41-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 2360fece-7249-466c-9692-7c0e79971479 1307 0 2025-03-19 11:44:21 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 146.190.145.41 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.793 [INFO][3318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.852 [INFO][3330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" HandleID="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Workload="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.874 [INFO][3330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" HandleID="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" 
Workload="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265d00), Attrs:map[string]string{"namespace":"default", "node":"146.190.145.41", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-19 11:44:21.852213511 +0000 UTC"}, Hostname:"146.190.145.41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.874 [INFO][3330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.874 [INFO][3330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.874 [INFO][3330] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.145.41' Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.881 [INFO][3330] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.890 [INFO][3330] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.902 [INFO][3330] ipam/ipam.go 489: Trying affinity for 192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.917 [INFO][3330] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.921 [INFO][3330] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.192/26 host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.921 [INFO][3330] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.192/26 handle="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.925 [INFO][3330] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3 Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.943 [INFO][3330] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.192/26 handle="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.959 [INFO][3330] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.195/26] block=192.168.72.192/26 handle="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.959 [INFO][3330] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.195/26] handle="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" host="146.190.145.41" Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.959 [INFO][3330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:44:21.992377 containerd[1479]: 2025-03-19 11:44:21.959 [INFO][3330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.195/26] IPv6=[] ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" HandleID="k8s-pod-network.a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Workload="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.994238 containerd[1479]: 2025-03-19 11:44:21.960 [INFO][3318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2360fece-7249-466c-9692-7c0e79971479", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.72.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:21.994238 containerd[1479]: 2025-03-19 11:44:21.961 [INFO][3318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.195/32] ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.994238 containerd[1479]: 2025-03-19 11:44:21.961 [INFO][3318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.994238 containerd[1479]: 2025-03-19 11:44:21.963 [INFO][3318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:21.994430 containerd[1479]: 2025-03-19 11:44:21.964 [INFO][3318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2360fece-7249-466c-9692-7c0e79971479", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.72.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b6:2e:ae:cc:e4:b0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:44:21.994430 containerd[1479]: 2025-03-19 11:44:21.989 [INFO][3318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.145.41-k8s-nfs--server--provisioner--0-eth0" Mar 19 11:44:22.024665 containerd[1479]: time="2025-03-19T11:44:22.024346787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:22.025003 containerd[1479]: time="2025-03-19T11:44:22.024878314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:22.025003 containerd[1479]: time="2025-03-19T11:44:22.024931734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:22.025379 containerd[1479]: time="2025-03-19T11:44:22.025308341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:22.048369 systemd[1]: run-containerd-runc-k8s.io-a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3-runc.qR0XEM.mount: Deactivated successfully. Mar 19 11:44:22.063213 systemd[1]: Started cri-containerd-a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3.scope - libcontainer container a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3. 
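In the endpoint dump above the nfs-server-provisioner ports are printed in hex (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the matching cni-plugin entry earlier lists them in decimal (2049, 32803, 20048, 875, 111, 662). A one-off Go sketch confirming the two lists are the same ports:

package main

import "fmt"

func main() {
	// Hex values copied from the WorkloadEndpointPort dump above.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		// Prints 2049, 32803, 20048, 875, 111 and 662, matching the
		// decimal port list in the nfs-server-provisioner endpoint.
		fmt.Printf("%-8s %d\n", p.name, p.hex)
	}
}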
Mar 19 11:44:22.074668 kubelet[1808]: E0319 11:44:22.074600 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:22.111894 containerd[1479]: time="2025-03-19T11:44:22.111777091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2360fece-7249-466c-9692-7c0e79971479,Namespace:default,Attempt:0,} returns sandbox id \"a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3\"" Mar 19 11:44:22.113987 containerd[1479]: time="2025-03-19T11:44:22.113951223Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 19 11:44:22.398196 systemd[1]: Started sshd@7-146.190.145.41:22-77.105.181.82:40418.service - OpenSSH per-connection server daemon (77.105.181.82:40418). Mar 19 11:44:23.063786 systemd-networkd[1393]: cali60e51b789ff: Gained IPv6LL Mar 19 11:44:23.082770 kubelet[1808]: E0319 11:44:23.075017 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:23.574447 sshd[3393]: Invalid user smile from 77.105.181.82 port 40418 Mar 19 11:44:23.812543 sshd[3393]: Received disconnect from 77.105.181.82 port 40418:11: Bye Bye [preauth] Mar 19 11:44:23.812543 sshd[3393]: Disconnected from invalid user smile 77.105.181.82 port 40418 [preauth] Mar 19 11:44:23.815797 systemd[1]: sshd@7-146.190.145.41:22-77.105.181.82:40418.service: Deactivated successfully. Mar 19 11:44:24.076128 kubelet[1808]: E0319 11:44:24.075884 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:24.419786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536664374.mount: Deactivated successfully. 
Mar 19 11:44:25.076835 kubelet[1808]: E0319 11:44:25.076794 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:26.078449 kubelet[1808]: E0319 11:44:26.078385 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:26.633576 containerd[1479]: time="2025-03-19T11:44:26.633463284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:26.634728 containerd[1479]: time="2025-03-19T11:44:26.634657176Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 19 11:44:26.635550 containerd[1479]: time="2025-03-19T11:44:26.635461576Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:26.642538 containerd[1479]: time="2025-03-19T11:44:26.642425226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:26.644200 containerd[1479]: time="2025-03-19T11:44:26.644011064Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.529833128s" Mar 19 11:44:26.644200 containerd[1479]: time="2025-03-19T11:44:26.644070202Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 19 11:44:26.647652 containerd[1479]: time="2025-03-19T11:44:26.647598631Z" level=info msg="CreateContainer within sandbox \"a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 19 11:44:26.667262 containerd[1479]: time="2025-03-19T11:44:26.666712374Z" level=info msg="CreateContainer within sandbox \"a29c29e8f6d6781fe251e5c93d6d4f99a66085729dbfdb19a73832cb036a59d3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3db97fd385988b6e648a8fb0fd54bb2b04f0f54f25c03c37b76725f578e1e9b0\"" Mar 19 11:44:26.670260 containerd[1479]: time="2025-03-19T11:44:26.670113713Z" level=info msg="StartContainer for \"3db97fd385988b6e648a8fb0fd54bb2b04f0f54f25c03c37b76725f578e1e9b0\"" Mar 19 11:44:26.729224 systemd[1]: Started cri-containerd-3db97fd385988b6e648a8fb0fd54bb2b04f0f54f25c03c37b76725f578e1e9b0.scope - libcontainer container 3db97fd385988b6e648a8fb0fd54bb2b04f0f54f25c03c37b76725f578e1e9b0. Mar 19 11:44:26.775398 containerd[1479]: time="2025-03-19T11:44:26.775323720Z" level=info msg="StartContainer for \"3db97fd385988b6e648a8fb0fd54bb2b04f0f54f25c03c37b76725f578e1e9b0\" returns successfully" Mar 19 11:44:27.010781 update_engine[1469]: I20250319 11:44:27.010537 1469 update_attempter.cc:509] Updating boot flags... 
Mar 19 11:44:27.064587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3506) Mar 19 11:44:27.079268 kubelet[1808]: E0319 11:44:27.079094 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:27.167820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3508) Mar 19 11:44:28.079388 kubelet[1808]: E0319 11:44:28.079332 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:29.039669 kubelet[1808]: E0319 11:44:29.039609 1808 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:29.080824 kubelet[1808]: E0319 11:44:29.080751 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:30.081580 kubelet[1808]: E0319 11:44:30.081511 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:31.082081 kubelet[1808]: E0319 11:44:31.082015 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:32.083215 kubelet[1808]: E0319 11:44:32.083090 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:33.083780 kubelet[1808]: E0319 11:44:33.083714 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:34.084674 kubelet[1808]: E0319 11:44:34.084604 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:35.085005 kubelet[1808]: E0319 11:44:35.084859 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:36.086175 kubelet[1808]: E0319 11:44:36.086095 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 19 11:44:36.560272 kubelet[1808]: I0319 11:44:36.560188 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.028156233 podStartE2EDuration="15.560162348s" podCreationTimestamp="2025-03-19 11:44:21 +0000 UTC" firstStartedPulling="2025-03-19 11:44:22.113628097 +0000 UTC m=+33.853312951" lastFinishedPulling="2025-03-19 11:44:26.645634202 +0000 UTC m=+38.385319066" observedRunningTime="2025-03-19 11:44:27.509327264 +0000 UTC m=+39.249012167" watchObservedRunningTime="2025-03-19 11:44:36.560162348 +0000 UTC m=+48.299847224" Mar 19 11:44:36.571096 systemd[1]: Created slice kubepods-besteffort-pod91c2c665_391e_4d01_aae0_2a621b4bf37a.slice - libcontainer container kubepods-besteffort-pod91c2c665_391e_4d01_aae0_2a621b4bf37a.slice. 
Mar 19 11:44:36.744084 kubelet[1808]: I0319 11:44:36.743997 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3048a32f-02ef-4605-9c00-baaf2e858c2d\" (UniqueName: \"kubernetes.io/nfs/91c2c665-391e-4d01-aae0-2a621b4bf37a-pvc-3048a32f-02ef-4605-9c00-baaf2e858c2d\") pod \"test-pod-1\" (UID: \"91c2c665-391e-4d01-aae0-2a621b4bf37a\") " pod="default/test-pod-1"
Mar 19 11:44:36.744084 kubelet[1808]: I0319 11:44:36.744073 1808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd4c5\" (UniqueName: \"kubernetes.io/projected/91c2c665-391e-4d01-aae0-2a621b4bf37a-kube-api-access-jd4c5\") pod \"test-pod-1\" (UID: \"91c2c665-391e-4d01-aae0-2a621b4bf37a\") " pod="default/test-pod-1"
Mar 19 11:44:36.899566 kernel: FS-Cache: Loaded
Mar 19 11:44:36.992875 kernel: RPC: Registered named UNIX socket transport module.
Mar 19 11:44:36.993116 kernel: RPC: Registered udp transport module.
Mar 19 11:44:36.993186 kernel: RPC: Registered tcp transport module.
Mar 19 11:44:36.993235 kernel: RPC: Registered tcp-with-tls transport module.
Mar 19 11:44:36.993864 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Mar 19 11:44:37.086912 kubelet[1808]: E0319 11:44:37.086822 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 11:44:37.314601 kernel: NFS: Registering the id_resolver key type
Mar 19 11:44:37.316823 kernel: Key type id_resolver registered
Mar 19 11:44:37.318676 kernel: Key type id_legacy registered
Mar 19 11:44:37.371892 nfsidmap[3541]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-4-956bb2dfea'
Mar 19 11:44:37.380030 nfsidmap[3543]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-4-956bb2dfea'
Mar 19 11:44:37.476246 containerd[1479]: time="2025-03-19T11:44:37.476161179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:91c2c665-391e-4d01-aae0-2a621b4bf37a,Namespace:default,Attempt:0,}"
Mar 19 11:44:37.757989 systemd-networkd[1393]: cali5ec59c6bf6e: Link UP
Mar 19 11:44:37.761010 systemd-networkd[1393]: cali5ec59c6bf6e: Gained carrier
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.585 [INFO][3545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.145.41-k8s-test--pod--1-eth0 default 91c2c665-391e-4d01-aae0-2a621b4bf37a 1369 0 2025-03-19 11:44:22 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.145.41 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.586 [INFO][3545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.644 [INFO][3556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" HandleID="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Workload="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.671 [INFO][3556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" HandleID="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Workload="146.190.145.41-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292f60), Attrs:map[string]string{"namespace":"default", "node":"146.190.145.41", "pod":"test-pod-1", "timestamp":"2025-03-19 11:44:37.644297719 +0000 UTC"}, Hostname:"146.190.145.41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.671 [INFO][3556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.671 [INFO][3556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.671 [INFO][3556] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.145.41'
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.678 [INFO][3556] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.687 [INFO][3556] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.698 [INFO][3556] ipam/ipam.go 489: Trying affinity for 192.168.72.192/26 host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.703 [INFO][3556] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.192/26 host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.711 [INFO][3556] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.192/26 host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.711 [INFO][3556] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.192/26 handle="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.716 [INFO][3556] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.727 [INFO][3556] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.192/26 handle="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.748 [INFO][3556] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.196/26] block=192.168.72.192/26 handle="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.748 [INFO][3556] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.196/26] handle="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" host="146.190.145.41"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.748 [INFO][3556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.749 [INFO][3556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.196/26] IPv6=[] ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" HandleID="k8s-pod-network.ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Workload="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.803884 containerd[1479]: 2025-03-19 11:44:37.751 [INFO][3545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"91c2c665-391e-4d01-aae0-2a621b4bf37a", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 19 11:44:37.805254 containerd[1479]: 2025-03-19 11:44:37.751 [INFO][3545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.196/32] ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.805254 containerd[1479]: 2025-03-19 11:44:37.751 [INFO][3545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.805254 containerd[1479]: 2025-03-19 11:44:37.761 [INFO][3545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.805254 containerd[1479]: 2025-03-19 11:44:37.762 [INFO][3545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.145.41-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"91c2c665-391e-4d01-aae0-2a621b4bf37a", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.145.41", ContainerID:"ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:2f:36:49:13:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 19 11:44:37.805254 containerd[1479]: 2025-03-19 11:44:37.800 [INFO][3545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.145.41-k8s-test--pod--1-eth0"
Mar 19 11:44:37.848808 containerd[1479]: time="2025-03-19T11:44:37.846761728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:44:37.848808 containerd[1479]: time="2025-03-19T11:44:37.846845612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:44:37.848808 containerd[1479]: time="2025-03-19T11:44:37.846860985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:37.848808 containerd[1479]: time="2025-03-19T11:44:37.846991023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:37.883974 systemd[1]: run-containerd-runc-k8s.io-ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c-runc.28NTbt.mount: Deactivated successfully.
Mar 19 11:44:37.895827 systemd[1]: Started cri-containerd-ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c.scope - libcontainer container ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c.
Mar 19 11:44:37.962920 containerd[1479]: time="2025-03-19T11:44:37.962838559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:91c2c665-391e-4d01-aae0-2a621b4bf37a,Namespace:default,Attempt:0,} returns sandbox id \"ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c\""
Mar 19 11:44:37.969777 containerd[1479]: time="2025-03-19T11:44:37.969301467Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 19 11:44:38.087657 kubelet[1808]: E0319 11:44:38.087574 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 11:44:38.297546 containerd[1479]: time="2025-03-19T11:44:38.297230873Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:44:38.298382 containerd[1479]: time="2025-03-19T11:44:38.298233137Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Mar 19 11:44:38.303688 containerd[1479]: time="2025-03-19T11:44:38.303578184Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 334.211057ms"
Mar 19 11:44:38.303688 containerd[1479]: time="2025-03-19T11:44:38.303686987Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\""
Mar 19 11:44:38.309688 containerd[1479]: time="2025-03-19T11:44:38.309343396Z" level=info msg="CreateContainer within sandbox \"ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Mar 19 11:44:38.340146 containerd[1479]: time="2025-03-19T11:44:38.339885614Z" level=info msg="CreateContainer within sandbox \"ffaa1e9aef73b064e5c0851bd6d0a333d5f4948ee4b69ebf763c482538254a3c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5fbf60dced1c8beb924371ebf985a14d6d2f87e14c17425b4e55bedfcda3e2b2\""
Mar 19 11:44:38.342024 containerd[1479]: time="2025-03-19T11:44:38.341068450Z" level=info msg="StartContainer for \"5fbf60dced1c8beb924371ebf985a14d6d2f87e14c17425b4e55bedfcda3e2b2\""
Mar 19 11:44:38.384959 systemd[1]: Started cri-containerd-5fbf60dced1c8beb924371ebf985a14d6d2f87e14c17425b4e55bedfcda3e2b2.scope - libcontainer container 5fbf60dced1c8beb924371ebf985a14d6d2f87e14c17425b4e55bedfcda3e2b2.
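The pull above finishes in roughly 334 ms with an ImageUpdate (not ImageCreate) event and only 61 bytes read, which suggests the nginx content was already in containerd's store and only the reference had to be re-resolved. A minimal sketch of issuing the same pull through the containerd Go client is shown below; the socket path and the k8s.io namespace are the usual defaults for a CRI-managed node, but they are assumptions here, not values taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd socket (assumed default path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the image the test pod uses; if the layers are already present,
	// only the reference is resolved, which matches the tiny byte count above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := image.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(image.Name(), size)
}
```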
Mar 19 11:44:38.425925 containerd[1479]: time="2025-03-19T11:44:38.425740293Z" level=info msg="StartContainer for \"5fbf60dced1c8beb924371ebf985a14d6d2f87e14c17425b4e55bedfcda3e2b2\" returns successfully"
Mar 19 11:44:38.871182 systemd-networkd[1393]: cali5ec59c6bf6e: Gained IPv6LL
Mar 19 11:44:39.088724 kubelet[1808]: E0319 11:44:39.088323 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 11:44:40.089309 kubelet[1808]: E0319 11:44:40.089243 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 11:44:40.469837 kubelet[1808]: E0319 11:44:40.469698 1808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 19 11:44:40.502547 kubelet[1808]: I0319 11:44:40.501104 1808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.164194672 podStartE2EDuration="18.501077188s" podCreationTimestamp="2025-03-19 11:44:22 +0000 UTC" firstStartedPulling="2025-03-19 11:44:37.968344068 +0000 UTC m=+49.708028931" lastFinishedPulling="2025-03-19 11:44:38.305226577 +0000 UTC m=+50.044911447" observedRunningTime="2025-03-19 11:44:38.559405594 +0000 UTC m=+50.299090467" watchObservedRunningTime="2025-03-19 11:44:40.501077188 +0000 UTC m=+52.240762053"
Mar 19 11:44:41.089901 kubelet[1808]: E0319 11:44:41.089828 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 19 11:44:42.090306 kubelet[1808]: E0319 11:44:42.090226 1808 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
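The dns.go:153 warning above is the kubelet noting that the merged resolv.conf for this pod listed more nameserver entries than the resolver honours (three, the longstanding glibc limit), so it applied the first three (67.207.67.2 67.207.67.3 67.207.67.2) and dropped the rest. The sketch below shows that trimming in isolation; it is not kubelet code, and the extra 8.8.8.8 entry is a made-up example.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the three-entry limit the resolver enforces.
const maxNameservers = 3

// appliedNameservers splits a resolv.conf body into the nameservers that would
// be applied and the ones that would be omitted with a warning.
func appliedNameservers(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// Hypothetical resolv.conf with one entry more than the limit allows.
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n"
	applied, omitted := appliedNameservers(conf)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```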