Feb 13 20:20:05.534365 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:20:05.534404 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:20:05.534423 kernel: BIOS-provided physical RAM map: Feb 13 20:20:05.534435 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:20:05.534445 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:20:05.534455 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:20:05.534468 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Feb 13 20:20:05.534479 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Feb 13 20:20:05.534490 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 20:20:05.534503 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:20:05.534539 kernel: NX (Execute Disable) protection: active Feb 13 20:20:05.534550 kernel: APIC: Static calls initialized Feb 13 20:20:05.534566 kernel: SMBIOS 2.8 present. Feb 13 20:20:05.534578 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 13 20:20:05.534591 kernel: Hypervisor detected: KVM Feb 13 20:20:05.534606 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:20:05.534624 kernel: kvm-clock: using sched offset of 4197180509 cycles Feb 13 20:20:05.534636 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:20:05.534649 kernel: tsc: Detected 1995.312 MHz processor Feb 13 20:20:05.534660 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:20:05.534672 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:20:05.534685 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Feb 13 20:20:05.534696 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:20:05.534708 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:20:05.534724 kernel: ACPI: Early table checksum verification disabled Feb 13 20:20:05.534736 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Feb 13 20:20:05.534748 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534760 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534772 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534784 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 13 20:20:05.534795 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534826 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534837 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534853 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:20:05.534864 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 13 20:20:05.535191 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Feb 13 20:20:05.535206 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 13 20:20:05.535219 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 13 20:20:05.535231 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 13 20:20:05.535243 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 13 20:20:05.535269 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 13 20:20:05.535282 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:20:05.535294 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:20:05.535307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 20:20:05.535319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 20:20:05.535339 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Feb 13 20:20:05.535351 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Feb 13 20:20:05.535367 kernel: Zone ranges: Feb 13 20:20:05.535380 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:20:05.535392 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Feb 13 20:20:05.535405 kernel: Normal empty Feb 13 20:20:05.535417 kernel: Movable zone start for each node Feb 13 20:20:05.535429 kernel: Early memory node ranges Feb 13 20:20:05.535441 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:20:05.535651 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Feb 13 20:20:05.535667 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Feb 13 20:20:05.535749 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:20:05.535766 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:20:05.535877 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Feb 13 20:20:05.535946 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 20:20:05.535960 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:20:05.536115 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:20:05.536139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 20:20:05.536152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:20:05.536332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:20:05.536358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:20:05.536372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:20:05.536386 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:20:05.536400 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:20:05.536413 kernel: TSC deadline timer available Feb 13 20:20:05.536426 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:20:05.536439 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:20:05.536452 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 13 20:20:05.536473 kernel: Booting paravirtualized kernel on KVM Feb 13 20:20:05.536489 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:20:05.536500 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:20:05.536512 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Feb 13 20:20:05.536523 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:20:05.536534 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:20:05.536546 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 13 20:20:05.536560 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:20:05.536572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:20:05.536588 kernel: random: crng init done Feb 13 20:20:05.536602 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:20:05.536614 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:20:05.536626 kernel: Fallback order for Node 0: 0 Feb 13 20:20:05.536639 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Feb 13 20:20:05.536651 kernel: Policy zone: DMA32 Feb 13 20:20:05.536663 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:20:05.536674 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved) Feb 13 20:20:05.536686 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:20:05.536701 kernel: Kernel/User page tables isolation: enabled Feb 13 20:20:05.536713 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:20:05.536725 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:20:05.536738 kernel: Dynamic Preempt: voluntary Feb 13 20:20:05.536750 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:20:05.536764 kernel: rcu: RCU event tracing is enabled. Feb 13 20:20:05.536780 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:20:05.536793 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:20:05.536850 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:20:05.536868 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:20:05.536882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:20:05.536898 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:20:05.536915 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:20:05.536930 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 20:20:05.536949 kernel: Console: colour VGA+ 80x25 Feb 13 20:20:05.536961 kernel: printk: console [tty0] enabled Feb 13 20:20:05.536976 kernel: printk: console [ttyS0] enabled Feb 13 20:20:05.536988 kernel: ACPI: Core revision 20230628 Feb 13 20:20:05.537005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 20:20:05.537016 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:20:05.537027 kernel: x2apic enabled Feb 13 20:20:05.537039 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:20:05.537054 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:20:05.537066 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Feb 13 20:20:05.537079 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312) Feb 13 20:20:05.537093 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 20:20:05.537110 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 20:20:05.537141 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:20:05.537155 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:20:05.537169 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:20:05.537187 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:20:05.537201 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 20:20:05.537216 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:20:05.537229 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:20:05.537242 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 20:20:05.537260 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:20:05.537284 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:20:05.537297 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:20:05.537310 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:20:05.537323 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:20:05.537338 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 20:20:05.537352 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:20:05.537366 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:20:05.537380 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:20:05.537399 kernel: landlock: Up and running. Feb 13 20:20:05.537413 kernel: SELinux: Initializing. Feb 13 20:20:05.537428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:20:05.537441 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:20:05.537457 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Feb 13 20:20:05.537472 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:20:05.537490 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:20:05.537507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:20:05.537524 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Feb 13 20:20:05.537540 kernel: signal: max sigframe size: 1776 Feb 13 20:20:05.537556 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:20:05.537572 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:20:05.537587 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:20:05.537600 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:20:05.537614 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:20:05.537627 kernel: .... node #0, CPUs: #1 Feb 13 20:20:05.537642 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:20:05.537662 kernel: smpboot: Max logical packages: 1 Feb 13 20:20:05.537679 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Feb 13 20:20:05.537694 kernel: devtmpfs: initialized Feb 13 20:20:05.537707 kernel: x86/mm: Memory block size: 128MB Feb 13 20:20:05.537725 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:20:05.537740 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:20:05.537755 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:20:05.537770 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:20:05.537784 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:20:05.537844 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:20:05.537868 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:20:05.537882 kernel: audit: type=2000 audit(1739478002.768:1): state=initialized audit_enabled=0 res=1 Feb 13 20:20:05.537897 kernel: cpuidle: using governor menu Feb 13 20:20:05.537913 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:20:05.537927 kernel: dca service started, version 1.12.1 Feb 13 20:20:05.537941 kernel: PCI: Using configuration type 1 for base access Feb 13 20:20:05.537955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:20:05.537979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:20:05.537994 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:20:05.538013 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:20:05.538028 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:20:05.538042 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:20:05.538056 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:20:05.538070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:20:05.538087 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:20:05.538101 kernel: ACPI: Interpreter enabled Feb 13 20:20:05.538115 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:20:05.538129 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:20:05.538146 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:20:05.538164 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:20:05.538178 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 13 20:20:05.538192 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:20:05.538537 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:20:05.538714 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:20:05.538871 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:20:05.538894 kernel: acpiphp: Slot [3] registered Feb 13 20:20:05.538908 kernel: acpiphp: Slot [4] registered Feb 13 20:20:05.538921 kernel: acpiphp: Slot [5] registered Feb 13 20:20:05.538933 kernel: acpiphp: Slot [6] registered Feb 13 20:20:05.538945 kernel: acpiphp: Slot [7] registered Feb 13 20:20:05.538958 kernel: acpiphp: Slot [8] registered Feb 13 20:20:05.538971 kernel: acpiphp: Slot [9] registered Feb 13 20:20:05.538984 kernel: acpiphp: Slot [10] registered Feb 13 20:20:05.538996 kernel: acpiphp: Slot [11] registered Feb 13 20:20:05.539012 kernel: acpiphp: Slot [12] registered Feb 13 20:20:05.539025 kernel: acpiphp: Slot [13] registered Feb 13 20:20:05.539037 kernel: acpiphp: Slot [14] registered Feb 13 20:20:05.539050 kernel: acpiphp: Slot [15] registered Feb 13 20:20:05.539063 kernel: acpiphp: Slot [16] registered Feb 13 20:20:05.539076 kernel: acpiphp: Slot [17] registered Feb 13 20:20:05.539089 kernel: acpiphp: Slot [18] registered Feb 13 20:20:05.539102 kernel: acpiphp: Slot [19] registered Feb 13 20:20:05.539115 kernel: acpiphp: Slot [20] registered Feb 13 20:20:05.539128 kernel: acpiphp: Slot [21] registered Feb 13 20:20:05.539144 kernel: acpiphp: Slot [22] registered Feb 13 20:20:05.539168 kernel: acpiphp: Slot [23] registered Feb 13 20:20:05.539180 kernel: acpiphp: Slot [24] registered Feb 13 20:20:05.539193 kernel: acpiphp: Slot [25] registered Feb 13 20:20:05.539206 kernel: acpiphp: Slot [26] registered Feb 13 20:20:05.539219 kernel: acpiphp: Slot [27] registered Feb 13 20:20:05.539231 kernel: acpiphp: Slot [28] registered Feb 13 20:20:05.539244 kernel: acpiphp: Slot [29] registered Feb 13 20:20:05.539257 kernel: acpiphp: Slot [30] registered Feb 13 20:20:05.539273 kernel: acpiphp: Slot [31] registered Feb 13 20:20:05.539378 kernel: PCI host bridge to bus 0000:00 Feb 13 20:20:05.539585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:20:05.539710 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Feb 13 20:20:05.539914 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:20:05.540064 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 20:20:05.540184 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 13 20:20:05.540304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:20:05.540516 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:20:05.540667 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 20:20:05.540831 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 13 20:20:05.541063 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 13 20:20:05.541220 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 20:20:05.541376 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 20:20:05.541525 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 20:20:05.541934 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 20:20:05.542112 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 13 20:20:05.542255 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 13 20:20:05.542437 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 20:20:05.542612 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 13 20:20:05.542785 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 13 20:20:05.542981 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 13 20:20:05.543129 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 13 20:20:05.543274 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 13 20:20:05.543488 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 13 20:20:05.543644 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 20:20:05.543915 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:20:05.544348 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:20:05.544534 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 13 20:20:05.544685 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 13 20:20:05.544845 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 13 20:20:05.545003 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:20:05.545165 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 13 20:20:05.545313 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 13 20:20:05.545450 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 13 20:20:05.545624 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 13 20:20:05.545763 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 13 20:20:05.546015 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 13 20:20:05.546150 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 13 20:20:05.546331 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 13 20:20:05.546477 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 20:20:05.546612 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 13 20:20:05.546745 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 13 20:20:05.546970 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Feb 13 20:20:05.547112 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 13 20:20:05.547243 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 13 20:20:05.547377 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 13 20:20:05.547537 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 13 20:20:05.547686 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 13 20:20:05.547860 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 13 20:20:05.547877 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:20:05.547891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:20:05.547905 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:20:05.547920 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:20:05.547939 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:20:05.547953 kernel: iommu: Default domain type: Translated Feb 13 20:20:05.547967 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:20:05.547981 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:20:05.547994 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:20:05.548024 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:20:05.548037 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Feb 13 20:20:05.548184 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 13 20:20:05.548379 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 13 20:20:05.548551 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:20:05.548569 kernel: vgaarb: loaded Feb 13 20:20:05.548583 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 20:20:05.548597 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 20:20:05.548611 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:20:05.548624 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:20:05.548638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:20:05.548652 kernel: pnp: PnP ACPI init Feb 13 20:20:05.548666 kernel: pnp: PnP ACPI: found 4 devices Feb 13 20:20:05.548685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:20:05.548699 kernel: NET: Registered PF_INET protocol family Feb 13 20:20:05.548712 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:20:05.548727 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:20:05.548741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:20:05.548755 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:20:05.548768 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:20:05.548782 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:20:05.548855 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:20:05.548872 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:20:05.548886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:20:05.548900 kernel: NET: Registered PF_XDP protocol family Feb 13 20:20:05.549064 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:20:05.549219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 
20:20:05.549346 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:20:05.549495 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 20:20:05.549686 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 13 20:20:05.549883 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 13 20:20:05.550082 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:20:05.550103 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 20:20:05.550286 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 63468 usecs Feb 13 20:20:05.550304 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:20:05.550318 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:20:05.550333 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Feb 13 20:20:05.550347 kernel: Initialise system trusted keyrings Feb 13 20:20:05.550366 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:20:05.550396 kernel: Key type asymmetric registered Feb 13 20:20:05.550411 kernel: Asymmetric key parser 'x509' registered Feb 13 20:20:05.550425 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:20:05.550452 kernel: io scheduler mq-deadline registered Feb 13 20:20:05.550477 kernel: io scheduler kyber registered Feb 13 20:20:05.550491 kernel: io scheduler bfq registered Feb 13 20:20:05.550505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:20:05.550518 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 13 20:20:05.550536 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 20:20:05.550551 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 20:20:05.550565 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:20:05.550579 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:20:05.551413 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:20:05.551434 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:20:05.551448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:20:05.551462 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:20:05.551698 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 20:20:05.551872 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 20:20:05.551999 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:20:04 UTC (1739478004) Feb 13 20:20:05.552141 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 20:20:05.552159 kernel: intel_pstate: CPU model not supported Feb 13 20:20:05.552172 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:20:05.552186 kernel: Segment Routing with IPv6 Feb 13 20:20:05.552199 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:20:05.552233 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:20:05.552254 kernel: Key type dns_resolver registered Feb 13 20:20:05.552268 kernel: IPI shorthand broadcast: enabled Feb 13 20:20:05.552281 kernel: sched_clock: Marking stable (2047007792, 204004579)->(2357649234, -106636863) Feb 13 20:20:05.552295 kernel: registered taskstats version 1 Feb 13 20:20:05.552310 kernel: Loading compiled-in X.509 certificates Feb 13 20:20:05.552324 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:20:05.552338 kernel: Key type .fscrypt 
registered Feb 13 20:20:05.552351 kernel: Key type fscrypt-provisioning registered Feb 13 20:20:05.552364 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:20:05.552382 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:20:05.552395 kernel: ima: No architecture policies found Feb 13 20:20:05.552409 kernel: clk: Disabling unused clocks Feb 13 20:20:05.552422 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:20:05.552436 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:20:05.552474 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:20:05.552491 kernel: Run /init as init process Feb 13 20:20:05.552505 kernel: with arguments: Feb 13 20:20:05.552520 kernel: /init Feb 13 20:20:05.552537 kernel: with environment: Feb 13 20:20:05.552550 kernel: HOME=/ Feb 13 20:20:05.552564 kernel: TERM=linux Feb 13 20:20:05.552577 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:20:05.552596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:20:05.552614 systemd[1]: Detected virtualization kvm. Feb 13 20:20:05.552629 systemd[1]: Detected architecture x86-64. Feb 13 20:20:05.552648 systemd[1]: Running in initrd. Feb 13 20:20:05.552662 systemd[1]: No hostname configured, using default hostname. Feb 13 20:20:05.552676 systemd[1]: Hostname set to . Feb 13 20:20:05.552691 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:20:05.552706 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:20:05.552722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:20:05.552737 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:20:05.552755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:20:05.552773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:20:05.552788 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:20:05.552879 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:20:05.552897 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:20:05.552912 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:20:05.552927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:20:05.552942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:20:05.552961 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:20:05.552975 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:20:05.552991 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:20:05.553009 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:20:05.553024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:20:05.553039 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 20:20:05.553057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:20:05.553072 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:20:05.553088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:20:05.553103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:20:05.553117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:20:05.553133 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:20:05.553148 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:20:05.553163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:20:05.553181 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:20:05.553196 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:20:05.553211 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:20:05.553226 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:20:05.553283 systemd-journald[183]: Collecting audit messages is disabled. Feb 13 20:20:05.553323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:20:05.553339 systemd-journald[183]: Journal started Feb 13 20:20:05.553374 systemd-journald[183]: Runtime Journal (/run/log/journal/e7fbc5ecfed041eeb4bbca9cca53143c) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:20:05.561863 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:20:05.575691 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:20:05.568746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:20:05.569925 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:20:05.581228 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:20:05.586089 systemd-modules-load[184]: Inserted module 'overlay' Feb 13 20:20:05.597629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:20:05.614133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:20:05.623367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:20:05.714163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:20:05.714208 kernel: Bridge firewalling registered Feb 13 20:20:05.672098 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 13 20:20:05.721028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:20:05.723625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:20:05.744072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:20:05.784984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:20:05.786284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:20:05.792727 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:20:05.824231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:20:05.841544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:20:05.842966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:20:05.870915 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:20:05.903536 dracut-cmdline[220]: dracut-dracut-053 Feb 13 20:20:05.909058 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:20:05.942291 systemd-resolved[218]: Positive Trust Anchors: Feb 13 20:20:05.943477 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:20:05.943541 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:20:05.953360 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 13 20:20:05.955433 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:20:05.957298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:20:06.102855 kernel: SCSI subsystem initialized Feb 13 20:20:06.118840 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:20:06.149751 kernel: iscsi: registered transport (tcp) Feb 13 20:20:06.236876 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:20:06.236970 kernel: QLogic iSCSI HBA Driver Feb 13 20:20:06.343361 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:20:06.355432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:20:06.432126 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:20:06.432248 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:20:06.433946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:20:06.524136 kernel: raid6: avx2x4 gen() 15210 MB/s Feb 13 20:20:06.555494 kernel: raid6: avx2x2 gen() 18328 MB/s Feb 13 20:20:06.592098 kernel: raid6: avx2x1 gen() 14157 MB/s Feb 13 20:20:06.592200 kernel: raid6: using algorithm avx2x2 gen() 18328 MB/s Feb 13 20:20:06.600468 kernel: raid6: .... xor() 476 MB/s, rmw enabled Feb 13 20:20:06.600585 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:20:06.651473 kernel: xor: automatically using best checksumming function avx Feb 13 20:20:07.039632 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:20:07.061585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:20:07.072868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 20:20:07.134291 systemd-udevd[403]: Using default interface naming scheme 'v255'. Feb 13 20:20:07.142020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:20:07.155868 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:20:07.202356 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Feb 13 20:20:07.275718 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:20:07.304329 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:20:07.423374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:20:07.432239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:20:07.462424 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:20:07.465156 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:20:07.466961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:20:07.468041 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:20:07.479136 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:20:07.511291 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:20:07.600856 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Feb 13 20:20:07.663074 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 20:20:07.663373 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:20:07.663398 kernel: GPT:9289727 != 125829119 Feb 13 20:20:07.663416 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:20:07.663452 kernel: GPT:9289727 != 125829119 Feb 13 20:20:07.663469 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:20:07.663485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:20:07.663502 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:20:07.663737 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Feb 13 20:20:07.786867 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Feb 13 20:20:07.787113 kernel: ACPI: bus type USB registered Feb 13 20:20:07.787156 kernel: libata version 3.00 loaded. Feb 13 20:20:07.787175 kernel: usbcore: registered new interface driver usbfs Feb 13 20:20:07.787194 kernel: usbcore: registered new interface driver hub Feb 13 20:20:07.787213 kernel: usbcore: registered new device driver usb Feb 13 20:20:07.787231 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 13 20:20:07.830156 kernel: scsi host1: ata_piix Feb 13 20:20:07.830406 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:20:07.830428 kernel: scsi host2: ata_piix Feb 13 20:20:07.830634 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 13 20:20:07.830654 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 13 20:20:07.895863 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Feb 13 20:20:07.920158 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (449) Feb 13 20:20:07.939249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 20:20:07.955679 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Feb 13 20:20:07.977865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:20:08.006163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:20:08.007291 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:20:08.033777 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:20:08.033900 kernel: AES CTR mode by8 optimization enabled Feb 13 20:20:08.034430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:20:08.037709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:20:08.038030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:20:08.094273 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 13 20:20:08.094583 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 13 20:20:08.094759 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 13 20:20:08.094947 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Feb 13 20:20:08.095096 kernel: hub 1-0:1.0: USB hub found Feb 13 20:20:08.099910 kernel: hub 1-0:1.0: 2 ports detected Feb 13 20:20:08.094089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:20:08.094945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:20:08.095217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:20:08.096419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:20:08.104429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:20:08.147905 disk-uuid[513]: Primary Header is updated. Feb 13 20:20:08.147905 disk-uuid[513]: Secondary Entries is updated. Feb 13 20:20:08.147905 disk-uuid[513]: Secondary Header is updated. Feb 13 20:20:08.169075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:20:08.178594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:20:08.208843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:20:08.258739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:20:08.309040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:20:08.353318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:20:09.188840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:20:09.190866 disk-uuid[532]: The operation has completed successfully. Feb 13 20:20:09.349148 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:20:09.349366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:20:09.360931 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:20:09.381994 sh[564]: Success Feb 13 20:20:09.409816 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:20:09.529443 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:20:09.554560 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:20:09.574894 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:20:09.610116 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:20:09.610221 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:20:09.614242 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:20:09.616730 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:20:09.616844 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:20:09.635870 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:20:09.638923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:20:09.654279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:20:09.666646 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:20:09.684183 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:20:09.684271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:20:09.686423 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:20:09.698924 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:20:09.715694 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:20:09.717582 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:20:09.731962 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:20:09.743174 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:20:09.980282 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:20:10.020166 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:20:10.043334 ignition[649]: Ignition 2.19.0 Feb 13 20:20:10.046438 ignition[649]: Stage: fetch-offline Feb 13 20:20:10.048435 ignition[649]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:10.048957 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:20:10.049179 ignition[649]: parsed url from cmdline: "" Feb 13 20:20:10.050605 systemd-networkd[755]: lo: Link UP Feb 13 20:20:10.049186 ignition[649]: no config URL provided Feb 13 20:20:10.050611 systemd-networkd[755]: lo: Gained carrier Feb 13 20:20:10.049196 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:20:10.054225 systemd-networkd[755]: Enumeration completed Feb 13 20:20:10.049213 ignition[649]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:20:10.054412 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:20:10.049223 ignition[649]: failed to fetch config: resource requires networking Feb 13 20:20:10.055147 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 20:20:10.049554 ignition[649]: Ignition finished successfully Feb 13 20:20:10.055153 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 13 20:20:10.057144 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:20:10.057150 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:20:10.058679 systemd-networkd[755]: eth0: Link UP Feb 13 20:20:10.058686 systemd-networkd[755]: eth0: Gained carrier Feb 13 20:20:10.058701 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 20:20:10.058890 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:20:10.062369 systemd-networkd[755]: eth1: Link UP Feb 13 20:20:10.062376 systemd-networkd[755]: eth1: Gained carrier Feb 13 20:20:10.062396 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:20:10.063666 systemd[1]: Reached target network.target - Network. Feb 13 20:20:10.084419 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:20:10.129006 systemd-networkd[755]: eth0: DHCPv4 address 64.23.133.95/20, gateway 64.23.128.1 acquired from 169.254.169.253 Feb 13 20:20:10.136157 systemd-networkd[755]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253 Feb 13 20:20:10.144120 ignition[759]: Ignition 2.19.0 Feb 13 20:20:10.144134 ignition[759]: Stage: fetch Feb 13 20:20:10.144404 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:10.144418 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:20:10.145479 ignition[759]: parsed url from cmdline: "" Feb 13 20:20:10.145487 ignition[759]: no config URL provided Feb 13 20:20:10.145500 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:20:10.145522 ignition[759]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:20:10.145553 ignition[759]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 13 20:20:10.190884 ignition[759]: GET result: OK Feb 13 20:20:10.191040 ignition[759]: parsing config with SHA512: 95e630d86fc2522f3677961942986abc8ca952a88890ddab060f9b31b33ef855c756f598e77d57c82213a560ab24f27e60d4d65065d58690da830e967ff1b026 Feb 13 20:20:10.201902 unknown[759]: fetched base config from "system" Feb 13 20:20:10.201930 unknown[759]: fetched base config from "system" Feb 13 20:20:10.202622 ignition[759]: fetch: fetch complete Feb 13 20:20:10.201941 unknown[759]: fetched user config from "digitalocean" Feb 13 20:20:10.202639 ignition[759]: fetch: fetch passed Feb 13 20:20:10.202755 ignition[759]: Ignition finished successfully Feb 13 20:20:10.207132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:20:10.244648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:20:10.289103 ignition[766]: Ignition 2.19.0 Feb 13 20:20:10.289122 ignition[766]: Stage: kargs Feb 13 20:20:10.289706 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:10.289724 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:20:10.296130 ignition[766]: kargs: kargs passed Feb 13 20:20:10.296262 ignition[766]: Ignition finished successfully Feb 13 20:20:10.299554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:20:10.329200 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 20:20:10.373958 ignition[772]: Ignition 2.19.0 Feb 13 20:20:10.375344 ignition[772]: Stage: disks Feb 13 20:20:10.376092 ignition[772]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:10.376113 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:20:10.377626 ignition[772]: disks: disks passed Feb 13 20:20:10.377721 ignition[772]: Ignition finished successfully Feb 13 20:20:10.405441 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:20:10.408383 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:20:10.410721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:20:10.412509 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:20:10.420403 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:20:10.421006 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:20:10.434777 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:20:10.458312 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:20:10.463451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:20:10.488939 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:20:10.713832 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:20:10.715587 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:20:10.717294 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:20:10.744710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:20:10.749826 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:20:10.752118 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Feb 13 20:20:10.771977 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:20:10.788870 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789) Feb 13 20:20:10.788919 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:20:10.788931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:20:10.789214 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:20:10.783226 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:20:10.783432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:20:10.803083 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:20:10.812286 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:20:10.847920 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:20:10.835547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:20:10.978849 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:20:10.982474 coreos-metadata[792]: Feb 13 20:20:10.980 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:20:10.984153 coreos-metadata[791]: Feb 13 20:20:10.984 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:20:10.993947 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:20:10.997304 coreos-metadata[791]: Feb 13 20:20:10.997 INFO Fetch successful
Feb 13 20:20:10.998838 coreos-metadata[792]: Feb 13 20:20:10.998 INFO Fetch successful
Feb 13 20:20:11.011598 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:20:11.011047 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb 13 20:20:11.024571 coreos-metadata[792]: Feb 13 20:20:11.012 INFO wrote hostname ci-4081.3.1-6-23070f926e to /sysroot/etc/hostname
Feb 13 20:20:11.011209 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Feb 13 20:20:11.018410 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:20:11.037670 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:20:11.272753 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:20:11.285897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:20:11.293093 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:20:11.317570 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:20:11.320970 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:11.403571 systemd-networkd[755]: eth0: Gained IPv6LL
Feb 13 20:20:11.441604 ignition[909]: INFO : Ignition 2.19.0
Feb 13 20:20:11.441604 ignition[909]: INFO : Stage: mount
Feb 13 20:20:11.441604 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:11.441604 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:20:11.455112 ignition[909]: INFO : mount: mount passed
Feb 13 20:20:11.455112 ignition[909]: INFO : Ignition finished successfully
Feb 13 20:20:11.445114 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:20:11.450665 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:20:11.475070 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:20:11.722176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:20:11.752924 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922)
Feb 13 20:20:11.759971 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:11.760075 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:11.775309 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:20:11.788911 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:20:11.793336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
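flatcar-metadata-hostname fetches the droplet's metadata JSON and writes the hostname into the still-staged root at /sysroot/etc/hostname. A rough Python analogue of that flow (illustrative; the real agent is Flatcar's metadata tooling, and "hostname" is DigitalOcean's documented metadata field):

```python
import json
import urllib.request

# Same endpoint the coreos-metadata lines above fetch.
METADATA = "http://169.254.169.254/metadata/v1.json"

def fetch_hostname() -> str:
    with urllib.request.urlopen(METADATA, timeout=10) as resp:
        return json.load(resp)["hostname"]  # e.g. ci-4081.3.1-6-23070f926e

def write_hostname(path: str = "/sysroot/etc/hostname") -> None:
    # Written under /sysroot because the real root is still staged
    # there while the initrd is running.
    with open(path, "w") as f:
        f.write(fetch_hostname() + "\n")
```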
Feb 13 20:20:11.831513 ignition[939]: INFO : Ignition 2.19.0
Feb 13 20:20:11.831513 ignition[939]: INFO : Stage: files
Feb 13 20:20:11.831513 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:11.831513 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:20:11.831513 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:20:11.836907 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:20:11.836907 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:20:11.844952 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:20:11.844952 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:20:11.849212 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:20:11.846256 unknown[939]: wrote ssh authorized keys file for user: core
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:20:11.855012 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 20:20:11.898077 systemd-networkd[755]: eth1: Gained IPv6LL
Feb 13 20:20:12.391327 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 20:20:12.979448 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:20:12.979448 ignition[939]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:12.982639 ignition[939]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:12.982639 ignition[939]: INFO : files: files passed
Feb 13 20:20:12.987347 ignition[939]: INFO : Ignition finished successfully
Feb 13 20:20:12.984641 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:20:12.996278 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
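Ops (5) and (6) above stage a Kubernetes sysext image: the raw file lands under /opt/extensions and a symlink in /etc/extensions makes systemd-sysext merge it at boot. A sketch of the same effect performed by hand, with URL and paths copied from the log (run as root; Ignition itself does this from its JSON config, not via Python):

```python
import os
import urllib.request

RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.30.1-x86-64.raw")
STORE = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(STORE), exist_ok=True)
urllib.request.urlretrieve(RAW_URL, STORE)   # mirrors op(6): write the file
os.makedirs(os.path.dirname(LINK), exist_ok=True)
if not os.path.islink(LINK):
    os.symlink(STORE, LINK)                  # mirrors op(5): write the link
```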
Feb 13 20:20:12.999068 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:20:13.005758 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:20:13.007102 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:20:13.039541 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:13.039541 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:13.047376 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:13.050587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:20:13.053677 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:20:13.065117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:20:13.106024 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:20:13.106191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:20:13.110719 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:20:13.111670 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:20:13.117314 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:20:13.125179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:20:13.150694 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:13.157066 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:20:13.187912 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:13.190103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:13.191943 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:20:13.194091 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:20:13.194325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:13.196689 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:20:13.198491 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:20:13.200321 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:20:13.202755 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:20:13.203936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:20:13.205902 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:20:13.207447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:20:13.209922 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:20:13.218414 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:20:13.219345 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:20:13.220883 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:20:13.221172 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:20:13.223051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:20:13.223856 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:20:13.225562 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:20:13.229090 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:20:13.230684 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:20:13.230896 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:20:13.233359 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:20:13.233631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:20:13.235761 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:20:13.235985 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:20:13.237959 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:20:13.238204 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:20:13.251151 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:20:13.258861 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:20:13.261181 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:20:13.261449 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:13.270565 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:20:13.270786 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:20:13.289611 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:20:13.289863 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:20:13.297913 ignition[991]: INFO : Ignition 2.19.0
Feb 13 20:20:13.297913 ignition[991]: INFO : Stage: umount
Feb 13 20:20:13.300992 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:13.300992 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:20:13.300992 ignition[991]: INFO : umount: umount passed
Feb 13 20:20:13.300992 ignition[991]: INFO : Ignition finished successfully
Feb 13 20:20:13.320241 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:20:13.320461 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:20:13.483121 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:20:13.484527 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:20:13.484694 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:20:13.488680 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:20:13.488780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:20:13.491134 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:20:13.491221 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:20:13.493976 systemd[1]: Stopped target network.target - Network.
Feb 13 20:20:13.501089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:20:13.501217 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:20:13.502940 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:20:13.504265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:20:13.506077 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:20:13.507061 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:20:13.508616 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:20:13.510234 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:20:13.510308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:20:13.511922 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:20:13.512000 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:20:13.513373 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:20:13.513466 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:20:13.514771 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:20:13.514881 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:20:13.517758 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:20:13.520953 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:20:13.522988 systemd-networkd[755]: eth1: DHCPv6 lease lost
Feb 13 20:20:13.526923 systemd-networkd[755]: eth0: DHCPv6 lease lost
Feb 13 20:20:13.532011 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:20:13.532254 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:20:13.534913 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:20:13.535073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:20:13.539212 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:20:13.539405 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:20:13.544945 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:20:13.545011 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:13.545930 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:20:13.546018 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:20:13.554198 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:20:13.554959 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:20:13.555197 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:20:13.556881 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:20:13.556978 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:20:13.559971 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:20:13.560088 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:13.562928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:20:13.563014 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:20:13.564904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:20:13.584759 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:20:13.585021 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:20:13.592273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:20:13.592448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:13.595391 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:20:13.595470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:13.597167 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:20:13.597295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:20:13.606399 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:20:13.606593 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:20:13.609023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:20:13.609163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:13.618308 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:20:13.619211 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:20:13.619375 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:20:13.620875 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:20:13.621001 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:20:13.625089 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:20:13.625216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:13.632162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:13.632300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:13.641068 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:20:13.641310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:20:13.655746 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:20:13.656194 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:20:13.658179 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:20:13.665844 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:20:13.705028 systemd[1]: Switching root.
Feb 13 20:20:13.895587 systemd-journald[183]: Journal stopped
Feb 13 20:20:15.954697 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:20:15.954893 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:20:15.954981 kernel: SELinux: policy capability open_perms=1
Feb 13 20:20:15.955005 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:20:15.955026 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:20:15.955042 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:20:15.955058 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:20:15.955070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:20:15.955081 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:20:15.955092 kernel: audit: type=1403 audit(1739478014.179:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:20:15.955113 systemd[1]: Successfully loaded SELinux policy in 77.108ms.
Feb 13 20:20:15.955139 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 40.891ms.
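After the pivot, the kernel messages above show the SELinux policy loading with its capability flags. For reference, the enforcement state can be read back through selinuxfs; a small illustrative check (assumes /sys/fs/selinux is mounted, as it is on this system):

```python
from typing import Optional

def selinux_enforcing() -> Optional[bool]:
    # /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive).
    try:
        with open("/sys/fs/selinux/enforce") as f:
            return f.read().strip() == "1"
    except FileNotFoundError:
        return None  # selinuxfs not mounted / SELinux disabled

print(selinux_enforcing())
```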
Feb 13 20:20:15.955164 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:20:15.955211 systemd[1]: Detected virtualization kvm.
Feb 13 20:20:15.955241 systemd[1]: Detected architecture x86-64.
Feb 13 20:20:15.955295 systemd[1]: Detected first boot.
Feb 13 20:20:15.955310 systemd[1]: Hostname set to <ci-4081.3.1-6-23070f926e>.
Feb 13 20:20:15.955322 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:20:15.955335 zram_generator::config[1033]: No configuration found.
Feb 13 20:20:15.955359 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:20:15.955389 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:20:15.955406 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:20:15.955418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:20:15.955431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:20:15.955444 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:20:15.955457 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:20:15.955469 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:20:15.955481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:20:15.955494 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:20:15.955506 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:20:15.955525 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:20:15.955537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:20:15.955549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:20:15.955562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:20:15.955573 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:20:15.955585 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:20:15.955596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:20:15.955608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:20:15.955620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:20:15.955635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:20:15.955654 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:20:15.955677 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:20:15.955696 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:20:15.955716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:15.956298 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:20:15.956333 systemd[1]: Reached target slices.target - Slice Units.
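On this first boot systemd derives the machine ID from the VM's UUID, which on KVM guests is exposed as the DMI product UUID. An illustrative sketch of the shape of that derivation (the authoritative logic lives in systemd, not here; reading product_uuid typically requires root):

```python
def machine_id_from_dmi() -> str:
    # KVM exposes the VM UUID at this DMI path; the machine-id format
    # is the same 128 bits rendered as lowercase hex without dashes.
    with open("/sys/class/dmi/id/product_uuid") as f:
        uuid = f.read().strip()
    return uuid.replace("-", "").lower()  # 32 hex characters
```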
Feb 13 20:20:15.956352 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:20:15.956373 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:20:15.956393 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:20:15.957972 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:15.958082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:15.958098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:15.958111 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:20:15.958139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:20:15.958169 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:20:15.958199 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:20:15.958229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:15.958247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:20:15.958268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:20:15.958285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:20:15.958304 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:20:15.958321 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:20:15.958339 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:20:15.958469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:15.958505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:20:15.958545 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:20:15.958599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:20:15.958655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:20:15.958708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:20:15.958760 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:20:15.958789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:20:15.958917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:20:15.958937 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:20:15.958957 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:20:15.958976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:20:15.958995 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:20:15.959013 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:20:15.959032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:20:15.959051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:20:15.959069 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:20:15.959094 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:20:15.959114 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:20:15.959132 systemd[1]: Stopped verity-setup.service.
Feb 13 20:20:15.959153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:15.959172 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:20:15.959261 systemd-journald[1109]: Collecting audit messages is disabled.
Feb 13 20:20:15.959323 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:20:15.959354 systemd-journald[1109]: Journal started
Feb 13 20:20:15.959395 systemd-journald[1109]: Runtime Journal (/run/log/journal/e7fbc5ecfed041eeb4bbca9cca53143c) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:20:15.410968 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:20:15.444365 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:20:15.445100 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:20:15.961857 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:20:15.967871 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:20:15.978146 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:20:15.979264 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:20:15.981676 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:20:15.984665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:15.987968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:20:15.989272 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:20:15.991362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:20:15.991573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:20:15.993077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:20:15.993955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:20:15.996487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:15.998551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:20:16.015859 kernel: loop: module loaded
Feb 13 20:20:16.019140 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:20:16.021113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:20:16.022033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:20:16.044166 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:20:16.050890 kernel: fuse: init (API version 7.39)
Feb 13 20:20:16.066072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:20:16.067650 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:20:16.067871 systemd[1]: Reached target local-fs.target - Local File Systems.
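The journald sizing line above is internally consistent: 39.3M max minus 4.9M used leaves the reported 34.4M free. On a live system the same accounting can be queried with journalctl; a thin illustrative wrapper:

```python
import subprocess

def journal_disk_usage() -> str:
    # 'journalctl --disk-usage' reports the combined size of archived
    # and active journal files, matching the numbers journald logs.
    out = subprocess.run(["journalctl", "--disk-usage"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

print(journal_disk_usage())
```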
Feb 13 20:20:16.074383 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:20:16.091216 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:20:16.115855 kernel: ACPI: bus type drm_connector registered
Feb 13 20:20:16.117922 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:20:16.119218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:16.122271 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:20:16.129065 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:20:16.130239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:20:16.141430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:20:16.142475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:20:16.155364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:20:16.171212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:20:16.179506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:20:16.191507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:20:16.193789 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:20:16.195234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:20:16.198143 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:20:16.198476 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:20:16.203618 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:20:16.205590 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:20:16.242541 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:20:16.283369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:20:16.345576 kernel: loop0: detected capacity change from 0 to 140768
Feb 13 20:20:16.350587 systemd-journald[1109]: Time spent on flushing to /var/log/journal/e7fbc5ecfed041eeb4bbca9cca53143c is 177.245ms for 974 entries.
Feb 13 20:20:16.350587 systemd-journald[1109]: System Journal (/var/log/journal/e7fbc5ecfed041eeb4bbca9cca53143c) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:20:16.570835 systemd-journald[1109]: Received client request to flush runtime journal.
Feb 13 20:20:16.570933 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:20:16.570961 kernel: loop1: detected capacity change from 0 to 210664
Feb 13 20:20:16.338489 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:20:16.355417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:20:16.365381 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:20:16.370673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:20:16.481381 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:20:16.486876 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:20:16.509054 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Feb 13 20:20:16.509076 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Feb 13 20:20:16.536128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:16.550263 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:20:16.556612 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:20:16.568185 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:20:16.575410 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:20:16.611792 kernel: loop2: detected capacity change from 0 to 142488
Feb 13 20:20:16.681515 kernel: loop3: detected capacity change from 0 to 8
Feb 13 20:20:16.681265 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:20:16.736789 kernel: loop4: detected capacity change from 0 to 140768
Feb 13 20:20:16.752333 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:20:16.771360 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:20:16.783969 kernel: loop5: detected capacity change from 0 to 210664
Feb 13 20:20:16.803129 kernel: loop6: detected capacity change from 0 to 142488
Feb 13 20:20:16.841868 kernel: loop7: detected capacity change from 0 to 8
Feb 13 20:20:16.846700 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 20:20:16.848419 (sd-merge)[1178]: Merged extensions into '/usr'.
Feb 13 20:20:16.871123 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:20:16.871169 systemd[1]: Reloading...
Feb 13 20:20:16.949592 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Feb 13 20:20:16.949626 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Feb 13 20:20:17.197872 zram_generator::config[1208]: No configuration found.
Feb 13 20:20:17.558851 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:20:17.685649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:20:17.792781 systemd[1]: Reloading finished in 919 ms.
Feb 13 20:20:17.827468 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:20:17.829732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:20:17.832191 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:20:17.857497 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:20:17.873324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:20:17.900559 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:20:17.900595 systemd[1]: Reloading...
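(sd-merge) above overlays the four extension images into /usr, after which systemd reloads so the units shipped inside the sysexts become visible. The merged state can be inspected with the systemd-sysext CLI; a thin illustrative wrapper:

```python
import subprocess

def sysext_status() -> str:
    # 'systemd-sysext status' lists each hierarchy (/usr, /opt) and the
    # extensions merged into it, e.g. kubernetes and oem-digitalocean.
    return subprocess.run(["systemd-sysext", "status"],
                          capture_output=True, text=True, check=True).stdout

print(sysext_status())
```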
Feb 13 20:20:17.949841 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:20:17.950224 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:20:17.951447 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:20:17.952675 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Feb 13 20:20:17.952855 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Feb 13 20:20:17.958541 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:20:17.958957 systemd-tmpfiles[1253]: Skipping /boot
Feb 13 20:20:17.986049 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:20:17.986254 systemd-tmpfiles[1253]: Skipping /boot
Feb 13 20:20:18.103847 zram_generator::config[1286]: No configuration found.
Feb 13 20:20:18.302820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:20:18.377618 systemd[1]: Reloading finished in 476 ms.
Feb 13 20:20:18.397173 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:20:18.405193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:20:18.434267 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:20:18.440393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:20:18.445238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:20:18.451200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:20:18.464683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:20:18.472311 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:20:18.484069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.484429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:18.496582 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:20:18.507709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:20:18.512358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:20:18.513726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:18.513942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.518846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.519191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:18.519442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:18.519534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.530620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.531238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:18.537653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:20:18.538924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:18.539132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:18.544598 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:20:18.566419 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:20:18.579127 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:20:18.636451 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:20:18.643540 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Feb 13 20:20:18.663552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:20:18.664424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:20:18.697742 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:20:18.726575 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:20:18.729793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:20:18.741430 augenrules[1357]: No rules
Feb 13 20:20:18.730543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:20:18.737046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:20:18.737790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:20:18.747758 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:20:18.750728 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:20:18.766419 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:20:18.766959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:20:18.785504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:20:18.790956 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:20:18.810266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:20:18.811021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:20:18.811167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:20:18.811221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:20:18.820409 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:20:19.075664 systemd-networkd[1377]: lo: Link UP
Feb 13 20:20:19.076594 systemd-networkd[1377]: lo: Gained carrier
Feb 13 20:20:19.078133 systemd-networkd[1377]: Enumeration completed
Feb 13 20:20:19.078504 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:20:19.081261 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 20:20:19.088309 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:20:19.108005 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Feb 13 20:20:19.110974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:19.111231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:19.121459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:20:19.133211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:20:19.144403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:20:19.148213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:19.148311 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:20:19.148340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:19.164279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1385)
Feb 13 20:20:19.179939 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:20:19.182167 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:20:19.205473 systemd-resolved[1332]: Positive Trust Anchors:
Feb 13 20:20:19.206920 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:20:19.206985 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:20:19.214455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:20:19.217002 kernel: ISO 9660 Extensions: RRIP_1991A
Feb 13 20:20:19.215928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:20:19.222189 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Feb 13 20:20:19.223574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:20:19.223779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:20:19.224565 systemd-resolved[1332]: Using system hostname 'ci-4081.3.1-6-23070f926e'.
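The positive trust anchor resolved installs above is the DNSSEC DS record for the root zone: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest of the root key-signing key. A minimal illustrative parser for that presentation format, fed with the exact record from the log:

```python
from typing import NamedTuple

class DS(NamedTuple):
    key_tag: int
    algorithm: int
    digest_type: int
    digest: str

def parse_ds(record: str) -> DS:
    # Layout: owner ('.'), class (IN), type (DS), then the four fields.
    _, _, _, tag, alg, dtype, digest = record.split()
    return DS(int(tag), int(alg), int(dtype), digest)

print(parse_ds(". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
```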
Feb 13 20:20:19.225106 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:20:19.225339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:20:19.232416 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:20:19.236180 systemd[1]: Reached target network.target - Network.
Feb 13 20:20:19.237407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:19.238266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:20:19.238348 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:20:19.322183 systemd-networkd[1377]: eth1: Configuring with /run/systemd/network/10-a6:99:49:1d:70:4d.network.
Feb 13 20:20:19.327522 systemd-networkd[1377]: eth1: Link UP
Feb 13 20:20:19.328409 systemd-networkd[1377]: eth1: Gained carrier
Feb 13 20:20:19.338986 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Feb 13 20:20:19.355244 systemd-networkd[1377]: eth0: Configuring with /run/systemd/network/10-ca:f8:ff:e6:04:19.network.
Feb 13 20:20:19.357963 systemd-networkd[1377]: eth0: Link UP
Feb 13 20:20:19.357979 systemd-networkd[1377]: eth0: Gained carrier
Feb 13 20:20:19.378661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:20:19.388867 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 13 20:20:19.410358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 20:20:19.387270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:20:19.431189 kernel: ACPI: button: Power Button [PWRF]
Feb 13 20:20:19.436625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:20:19.462109 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 20:20:19.523987 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:20:19.540898 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 13 20:20:19.544700 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 13 20:20:19.557455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:19.569023 kernel: Console: switching to colour dummy device 80x25
Feb 13 20:20:19.569196 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 20:20:19.569215 kernel: [drm] features: -context_init
Feb 13 20:20:19.573126 kernel: [drm] number of scanouts: 1
Feb 13 20:20:19.573260 kernel: [drm] number of cap sets: 0
Feb 13 20:20:19.584853 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Feb 13 20:20:19.604844 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 13 20:20:19.608404 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:20:19.623875 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 20:20:19.635887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:19.636283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:19.650393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
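Unlike the initrd, which matched the generic yy-digitalocean/zz-default units, the real root configures each NIC from a per-interface unit named after its MAC address, /run/systemd/network/10-<mac>.network. An illustrative reconstruction of that filename from a live interface:

```python
def unit_name_for(iface: str) -> str:
    # sysfs exposes the MAC at /sys/class/net/<iface>/address.
    with open(f"/sys/class/net/{iface}/address") as f:
        mac = f.read().strip()  # e.g. a6:99:49:1d:70:4d
    return f"/run/systemd/network/10-{mac}.network"

print(unit_name_for("eth1"))  # /run/systemd/network/10-a6:99:49:1d:70:4d.network
```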
Feb 13 20:20:19.663045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:19.663561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:19.675233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:19.798863 kernel: EDAC MC: Ver: 3.0.0
Feb 13 20:20:19.830629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:19.850617 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:20:19.872465 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:20:19.914131 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:20:19.955883 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:20:19.959783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:20:19.961931 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:20:19.962431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:20:19.963270 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:20:19.963874 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:20:19.964170 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:20:19.964597 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:20:19.964720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:20:19.964770 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:20:19.965063 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:20:19.969473 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:20:19.973669 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:20:19.991283 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:20:20.003601 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:20:20.013779 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:20:20.015036 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:20:20.015770 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:20:20.018737 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:20:20.018773 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:20:20.020928 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:20:20.036295 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:20:20.064004 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 20:20:20.086015 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:20:20.096200 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:20:20.114871 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:20:20.115823 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:20:20.131301 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:20:20.138172 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:20:20.152402 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:20:20.167765 jq[1444]: false
Feb 13 20:20:20.168373 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 20:20:20.171754 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 20:20:20.175089 coreos-metadata[1440]: Feb 13 20:20:20.174 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:20:20.177418 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 20:20:20.183185 dbus-daemon[1443]: [system] SELinux support is enabled
Feb 13 20:20:20.186313 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 20:20:20.201226 coreos-metadata[1440]: Feb 13 20:20:20.201 INFO Fetch successful
Feb 13 20:20:20.202055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 20:20:20.207742 extend-filesystems[1445]: Found loop4
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found loop5
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found loop6
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found loop7
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda1
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda2
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda3
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found usr
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda4
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda6
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda7
Feb 13 20:20:20.209756 extend-filesystems[1445]: Found vda9
Feb 13 20:20:20.209756 extend-filesystems[1445]: Checking size of /dev/vda9
Feb 13 20:20:20.213187 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 20:20:20.228289 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:20:20.264612 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 20:20:20.266753 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 20:20:20.267651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 20:20:20.270481 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 20:20:20.312930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 20:20:20.313051 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 20:20:20.318334 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 20:20:20.318496 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:20:20.318535 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:20:20.336498 extend-filesystems[1445]: Resized partition /dev/vda9 Feb 13 20:20:20.373919 jq[1452]: true Feb 13 20:20:20.374153 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:20:20.407089 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:20:20.407244 update_engine[1451]: I20250213 20:20:20.395522 1451 main.cc:92] Flatcar Update Engine starting Feb 13 20:20:20.411388 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:20:20.413649 update_engine[1451]: I20250213 20:20:20.412739 1451 update_check_scheduler.cc:74] Next update check in 9m53s Feb 13 20:20:20.427941 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372) Feb 13 20:20:20.433641 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:20:20.450552 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:20:20.452149 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:20:20.474129 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:20:20.484522 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:20:20.493612 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:20:20.521289 jq[1473]: true Feb 13 20:20:20.538231 systemd-networkd[1377]: eth1: Gained IPv6LL Feb 13 20:20:20.559247 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:20:20.561901 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:20:20.572349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:20:20.580340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:20:20.703728 systemd-logind[1449]: New seat seat0. Feb 13 20:20:20.710099 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:20:20.710965 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:20:20.711417 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:20:20.728847 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:20:20.779784 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:20:20.779784 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:20:20.779784 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:20:20.807193 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Feb 13 20:20:20.807193 extend-filesystems[1445]: Found vdb Feb 13 20:20:20.785490 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:20:20.787600 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:20:20.863585 systemd-networkd[1377]: eth0: Gained IPv6LL Feb 13 20:20:20.875247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
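The extend-filesystems step grows /dev/vda9 from 553472 to 15121403 blocks of 4 KiB each. A quick Python check of what those block counts mean in bytes:

    # Block counts reported by resize2fs for /dev/vda9 above (4 KiB blocks).
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 553_472, 15_121_403

    def gib(blocks):
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")  # ~57.68 GiB

So the on-line resize takes the root filesystem from roughly 2.1 GiB to the droplet's full ~57.7 GiB disk.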
Feb 13 20:20:20.941992 bash[1510]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:20:20.945232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:20:20.960576 systemd[1]: Starting sshkeys.service... Feb 13 20:20:21.057926 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:20:21.087088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:20:21.248589 coreos-metadata[1521]: Feb 13 20:20:21.246 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:20:21.286242 coreos-metadata[1521]: Feb 13 20:20:21.280 INFO Fetch successful Feb 13 20:20:21.313859 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:20:21.337355 unknown[1521]: wrote ssh authorized keys file for user: core Feb 13 20:20:21.474085 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:20:21.479995 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:20:21.494948 systemd[1]: Finished sshkeys.service. Feb 13 20:20:21.602961 containerd[1477]: time="2025-02-13T20:20:21.600507249Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:20:21.759233 containerd[1477]: time="2025-02-13T20:20:21.754246267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.768641 containerd[1477]: time="2025-02-13T20:20:21.768542907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:21.768983 containerd[1477]: time="2025-02-13T20:20:21.768952050Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:20:21.772492 containerd[1477]: time="2025-02-13T20:20:21.769198019Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:20:21.772492 containerd[1477]: time="2025-02-13T20:20:21.769547565Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:20:21.772492 containerd[1477]: time="2025-02-13T20:20:21.769578469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.772492 containerd[1477]: time="2025-02-13T20:20:21.769686277Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:21.772492 containerd[1477]: time="2025-02-13T20:20:21.769707932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.774046 containerd[1477]: time="2025-02-13T20:20:21.773974269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:21.774861 containerd[1477]: time="2025-02-13T20:20:21.774814426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.775008 containerd[1477]: time="2025-02-13T20:20:21.774987357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:21.775070 containerd[1477]: time="2025-02-13T20:20:21.775056009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.775371 containerd[1477]: time="2025-02-13T20:20:21.775347324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.777236 containerd[1477]: time="2025-02-13T20:20:21.776366714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:21.777711 containerd[1477]: time="2025-02-13T20:20:21.777660460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:21.778220 containerd[1477]: time="2025-02-13T20:20:21.778188890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:20:21.778525 containerd[1477]: time="2025-02-13T20:20:21.778498855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:20:21.779138 containerd[1477]: time="2025-02-13T20:20:21.779107847Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:20:21.841781 containerd[1477]: time="2025-02-13T20:20:21.841687078Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:20:21.844996 containerd[1477]: time="2025-02-13T20:20:21.842351618Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:20:21.844996 containerd[1477]: time="2025-02-13T20:20:21.844031922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:20:21.844996 containerd[1477]: time="2025-02-13T20:20:21.844082829Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:20:21.844996 containerd[1477]: time="2025-02-13T20:20:21.844138179Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:20:21.844996 containerd[1477]: time="2025-02-13T20:20:21.844603550Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:20:21.848262 containerd[1477]: time="2025-02-13T20:20:21.847367942Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:20:21.848262 containerd[1477]: time="2025-02-13T20:20:21.848004198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 20:20:21.848262 containerd[1477]: time="2025-02-13T20:20:21.848052519Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:20:21.848262 containerd[1477]: time="2025-02-13T20:20:21.848169485Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:20:21.848262 containerd[1477]: time="2025-02-13T20:20:21.848198203Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.848222402Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.849678776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.849744246Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.849772145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.849843419Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.851898752Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.851974164Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.852038311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.852074687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.852113151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.852210 containerd[1477]: time="2025-02-13T20:20:21.852137738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854131 containerd[1477]: time="2025-02-13T20:20:21.853123336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854131 containerd[1477]: time="2025-02-13T20:20:21.853193436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854131 containerd[1477]: time="2025-02-13T20:20:21.853218969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854062009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854289304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854319003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854352810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854391459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.854622 containerd[1477]: time="2025-02-13T20:20:21.854413829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.854844831Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.854921882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.854948298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.854968795Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.855685167Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.855730439Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:20:21.855943 containerd[1477]: time="2025-02-13T20:20:21.855752931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:20:21.859146 containerd[1477]: time="2025-02-13T20:20:21.856507621Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:20:21.859146 containerd[1477]: time="2025-02-13T20:20:21.858069337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:20:21.859146 containerd[1477]: time="2025-02-13T20:20:21.858159257Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:20:21.859146 containerd[1477]: time="2025-02-13T20:20:21.858197648Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:20:21.859146 containerd[1477]: time="2025-02-13T20:20:21.858216473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:20:21.861465 containerd[1477]: time="2025-02-13T20:20:21.859676981Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:20:21.861465 containerd[1477]: time="2025-02-13T20:20:21.859844797Z" level=info msg="Connect containerd service" Feb 13 20:20:21.861465 containerd[1477]: time="2025-02-13T20:20:21.859929347Z" level=info msg="using legacy CRI server" Feb 13 20:20:21.861465 containerd[1477]: time="2025-02-13T20:20:21.859942656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:20:21.861465 containerd[1477]: time="2025-02-13T20:20:21.860162490Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:20:21.866856 containerd[1477]: time="2025-02-13T20:20:21.865107632Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:20:21.867218 
containerd[1477]: time="2025-02-13T20:20:21.867140063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867318250Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867142375Z" level=info msg="Start subscribing containerd event" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867441722Z" level=info msg="Start recovering state" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867556742Z" level=info msg="Start event monitor" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867573295Z" level=info msg="Start snapshots syncer" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867588650Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867604602Z" level=info msg="Start streaming server" Feb 13 20:20:21.872242 containerd[1477]: time="2025-02-13T20:20:21.867906249Z" level=info msg="containerd successfully booted in 0.273966s" Feb 13 20:20:21.868302 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:20:21.958159 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:20:22.036627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:20:22.060059 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:20:22.091383 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:20:22.091838 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:20:22.119017 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:20:22.172072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:20:22.187641 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:20:22.204419 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:20:22.207976 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:20:23.186537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:20:23.190422 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:20:23.194373 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:20:23.215364 systemd[1]: Startup finished in 2.382s (kernel) + 9.213s (initrd) + 9.109s (userspace) = 20.705s. Feb 13 20:20:24.633409 kubelet[1556]: E0213 20:20:24.633325 1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:20:24.643629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:20:24.644541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:20:24.645469 systemd[1]: kubelet.service: Consumed 1.765s CPU time. Feb 13 20:20:25.928153 systemd-resolved[1332]: Clock change detected. Flushing caches. Feb 13 20:20:25.929684 systemd-timesyncd[1346]: Contacted time server 15.204.87.223:123 (1.flatcar.pool.ntp.org). 
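containerd skipped the btrfs and zfs snapshotters above because their state directories sit on an ext4 filesystem. A rough Python equivalent of that check, reading /proc/mounts to find the filesystem type backing a path (Linux-specific, and a simplification of what containerd actually does):

    # Find the filesystem type backing a path via the longest mount-point match.
    def fs_type(path):
        best, fstype = "", None
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mnt, kind, *_ = line.split()
                covers = path == mnt or path.startswith(mnt.rstrip("/") + "/")
                if covers and len(mnt) > len(best):
                    best, fstype = mnt, kind
        return fstype

    print(fs_type("/var/lib/containerd"))  # "ext4" on this machine, per the log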
Feb 13 20:20:25.929892 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-02-13 20:20:25.926625 UTC. Feb 13 20:20:30.009814 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:20:30.020133 systemd[1]: Started sshd@0-64.23.133.95:22-147.75.109.163:37404.service - OpenSSH per-connection server daemon (147.75.109.163:37404). Feb 13 20:20:30.145091 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 37404 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:30.154650 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:30.190118 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:20:30.206634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:20:30.213979 systemd-logind[1449]: New session 1 of user core. Feb 13 20:20:30.235707 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:20:30.252460 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:20:30.274324 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:20:30.493223 systemd[1574]: Queued start job for default target default.target. Feb 13 20:20:30.506116 systemd[1574]: Created slice app.slice - User Application Slice. Feb 13 20:20:30.506237 systemd[1574]: Reached target paths.target - Paths. Feb 13 20:20:30.506291 systemd[1574]: Reached target timers.target - Timers. Feb 13 20:20:30.509462 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:20:30.567626 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:20:30.567910 systemd[1574]: Reached target sockets.target - Sockets. Feb 13 20:20:30.567938 systemd[1574]: Reached target basic.target - Basic System. Feb 13 20:20:30.568022 systemd[1574]: Reached target default.target - Main User Target. Feb 13 20:20:30.568074 systemd[1574]: Startup finished in 269ms. Feb 13 20:20:30.568442 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:20:30.589951 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:20:30.677719 systemd[1]: Started sshd@1-64.23.133.95:22-147.75.109.163:37410.service - OpenSSH per-connection server daemon (147.75.109.163:37410). Feb 13 20:20:30.780693 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 37410 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:30.783957 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:30.799568 systemd-logind[1449]: New session 2 of user core. Feb 13 20:20:30.811582 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:20:30.899517 sshd[1586]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:30.912945 systemd[1]: sshd@1-64.23.133.95:22-147.75.109.163:37410.service: Deactivated successfully. Feb 13 20:20:30.916139 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:20:30.920140 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:20:30.926884 systemd[1]: Started sshd@2-64.23.133.95:22-147.75.109.163:37420.service - OpenSSH per-connection server daemon (147.75.109.163:37420). Feb 13 20:20:30.930186 systemd-logind[1449]: Removed session 2. 
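systemd names each per-connection SSH unit after the connection's endpoints, e.g. sshd@0-64.23.133.95:22-147.75.109.163:37404.service. A small Python sketch that splits such a name back into its parts (IPv4 only; IPv6 literals would need a different pattern):

    import re

    unit = "sshd@0-64.23.133.95:22-147.75.109.163:37404.service"
    m = re.match(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service$", unit)
    instance, lhost, lport, rhost, rport = m.groups()
    print(f"connection #{instance}: {rhost}:{rport} -> {lhost}:{lport}")
    # connection #0: 147.75.109.163:37404 -> 64.23.133.95:22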
Feb 13 20:20:31.002028 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 37420 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:31.006275 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.018985 systemd-logind[1449]: New session 3 of user core. Feb 13 20:20:31.025701 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:20:31.106587 sshd[1593]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.129134 systemd[1]: sshd@2-64.23.133.95:22-147.75.109.163:37420.service: Deactivated successfully. Feb 13 20:20:31.135459 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:20:31.139225 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:20:31.147943 systemd[1]: Started sshd@3-64.23.133.95:22-147.75.109.163:37428.service - OpenSSH per-connection server daemon (147.75.109.163:37428). Feb 13 20:20:31.152026 systemd-logind[1449]: Removed session 3. Feb 13 20:20:31.263944 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 37428 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:31.267775 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.278000 systemd-logind[1449]: New session 4 of user core. Feb 13 20:20:31.287441 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:20:31.373406 sshd[1600]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.387404 systemd[1]: sshd@3-64.23.133.95:22-147.75.109.163:37428.service: Deactivated successfully. Feb 13 20:20:31.397259 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:20:31.403593 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:20:31.409473 systemd[1]: Started sshd@4-64.23.133.95:22-147.75.109.163:37440.service - OpenSSH per-connection server daemon (147.75.109.163:37440). Feb 13 20:20:31.420875 systemd-logind[1449]: Removed session 4. Feb 13 20:20:31.482326 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 37440 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:31.486958 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.500988 systemd-logind[1449]: New session 5 of user core. Feb 13 20:20:31.513387 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:20:31.616287 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:20:31.617325 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:20:31.648925 sudo[1610]: pam_unix(sudo:session): session closed for user root Feb 13 20:20:31.658976 sshd[1607]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.678275 systemd[1]: Started sshd@5-64.23.133.95:22-147.75.109.163:37450.service - OpenSSH per-connection server daemon (147.75.109.163:37450). Feb 13 20:20:31.680760 systemd[1]: sshd@4-64.23.133.95:22-147.75.109.163:37440.service: Deactivated successfully. Feb 13 20:20:31.689301 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:20:31.695187 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:20:31.699095 systemd-logind[1449]: Removed session 5. 
Feb 13 20:20:31.741603 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 37450 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:31.746108 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.756635 systemd-logind[1449]: New session 6 of user core. Feb 13 20:20:31.768127 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:20:31.842580 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:20:31.843330 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:20:31.859101 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 20:20:31.877439 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:20:31.878552 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:20:31.904078 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:20:31.920966 auditctl[1622]: No rules Feb 13 20:20:31.921661 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:20:31.921993 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:20:31.938558 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:20:31.998167 augenrules[1640]: No rules Feb 13 20:20:32.001194 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:20:32.003216 sudo[1618]: pam_unix(sudo:session): session closed for user root Feb 13 20:20:32.008981 sshd[1613]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:32.026615 systemd[1]: sshd@5-64.23.133.95:22-147.75.109.163:37450.service: Deactivated successfully. Feb 13 20:20:32.031741 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:20:32.036676 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:20:32.058635 systemd[1]: Started sshd@6-64.23.133.95:22-147.75.109.163:37466.service - OpenSSH per-connection server daemon (147.75.109.163:37466). Feb 13 20:20:32.061981 systemd-logind[1449]: Removed session 6. Feb 13 20:20:32.118015 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 37466 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:32.122587 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:32.133298 systemd-logind[1449]: New session 7 of user core. Feb 13 20:20:32.145942 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:20:32.218603 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:20:32.220266 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:20:33.509461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:20:33.511240 systemd[1]: kubelet.service: Consumed 1.765s CPU time. Feb 13 20:20:33.522527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:20:33.586680 systemd[1]: Reloading requested from client PID 1689 ('systemctl') (unit session-7.scope)... Feb 13 20:20:33.586761 systemd[1]: Reloading... Feb 13 20:20:33.810841 zram_generator::config[1730]: No configuration found. 
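The sudo records above share a fixed message layout: the invoking user, then "PWD=", "USER=", and "COMMAND=" fields separated by " ; ". A short Python sketch that parses the message portion of one such record (illustrative only, not how audit tooling actually consumes these lines):

    line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules"
    user, rest = line.split(" : ", 1)
    fields = dict(part.split("=", 1) for part in rest.split(" ; "))
    print(user, "->", fields["COMMAND"])
    # core -> /usr/bin/systemctl restart audit-rules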
Feb 13 20:20:34.031699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:20:34.163331 systemd[1]: Reloading finished in 575 ms. Feb 13 20:20:34.268294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:20:34.268476 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:20:34.269149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:20:34.283742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:20:34.568973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:20:34.596218 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:20:34.703988 kubelet[1780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:20:34.703988 kubelet[1780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:20:34.703988 kubelet[1780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:20:34.703988 kubelet[1780]: I0213 20:20:34.703522 1780 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:20:35.755774 kubelet[1780]: I0213 20:20:35.755678 1780 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:20:35.755774 kubelet[1780]: I0213 20:20:35.755753 1780 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:20:35.756817 kubelet[1780]: I0213 20:20:35.756199 1780 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:20:35.793012 kubelet[1780]: I0213 20:20:35.791807 1780 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:20:35.819984 kubelet[1780]: I0213 20:20:35.818530 1780 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:20:35.819984 kubelet[1780]: I0213 20:20:35.818897 1780 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:20:35.819984 kubelet[1780]: I0213 20:20:35.818946 1780 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"64.23.133.95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:20:35.819984 kubelet[1780]: I0213 20:20:35.819286 1780 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:20:35.820497 kubelet[1780]: I0213 20:20:35.819301 1780 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:20:35.820497 kubelet[1780]: I0213 20:20:35.819584 1780 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:20:35.822426 kubelet[1780]: I0213 20:20:35.820800 1780 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:20:35.822426 kubelet[1780]: I0213 20:20:35.820915 1780 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:20:35.822426 kubelet[1780]: I0213 20:20:35.820958 1780 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:20:35.822426 kubelet[1780]: I0213 20:20:35.820985 1780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:20:35.822426 kubelet[1780]: E0213 20:20:35.822085 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:35.822426 kubelet[1780]: E0213 20:20:35.822142 1780 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:35.827935 kubelet[1780]: I0213 20:20:35.827895 1780 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:20:35.831069 kubelet[1780]: I0213 20:20:35.831004 1780 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:20:35.831613 kubelet[1780]: W0213 20:20:35.831591 1780 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:20:35.833580 kubelet[1780]: I0213 20:20:35.833539 1780 server.go:1264] "Started kubelet" Feb 13 20:20:35.836701 kubelet[1780]: I0213 20:20:35.836630 1780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:20:35.844968 kubelet[1780]: I0213 20:20:35.844882 1780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:20:35.847135 kubelet[1780]: I0213 20:20:35.846671 1780 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:20:35.848348 kubelet[1780]: I0213 20:20:35.848226 1780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:20:35.848758 kubelet[1780]: I0213 20:20:35.848643 1780 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:20:35.852217 kubelet[1780]: I0213 20:20:35.852172 1780 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:20:35.854533 kubelet[1780]: I0213 20:20:35.853997 1780 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:20:35.854696 kubelet[1780]: I0213 20:20:35.854611 1780 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:20:35.861084 kubelet[1780]: E0213 20:20:35.860919 1780 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:20:35.864839 kubelet[1780]: I0213 20:20:35.864138 1780 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:20:35.864839 kubelet[1780]: I0213 20:20:35.864162 1780 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:20:35.864839 kubelet[1780]: I0213 20:20:35.864353 1780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:20:35.867418 kubelet[1780]: E0213 20:20:35.867113 1780 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{64.23.133.95.1823de0eb472cf0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:64.23.133.95,UID:64.23.133.95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:64.23.133.95,},FirstTimestamp:2025-02-13 20:20:35.833474829 +0000 UTC m=+1.230933365,LastTimestamp:2025-02-13 20:20:35.833474829 +0000 UTC m=+1.230933365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:64.23.133.95,}" Feb 13 20:20:35.869887 kubelet[1780]: W0213 20:20:35.867927 1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "64.23.133.95" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:20:35.869887 kubelet[1780]: E0213 20:20:35.868002 1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "64.23.133.95" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:20:35.869887 
kubelet[1780]: W0213 20:20:35.868071 1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:20:35.869887 kubelet[1780]: E0213 20:20:35.868089 1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:20:35.877488 kubelet[1780]: E0213 20:20:35.877392 1780 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"64.23.133.95\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 20:20:35.878870 kubelet[1780]: W0213 20:20:35.877823 1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 20:20:35.878870 kubelet[1780]: E0213 20:20:35.877906 1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 20:20:35.900016 kubelet[1780]: E0213 20:20:35.896754 1780 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{64.23.133.95.1823de0eb6152a35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:64.23.133.95,UID:64.23.133.95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:64.23.133.95,},FirstTimestamp:2025-02-13 20:20:35.860892213 +0000 UTC m=+1.258350741,LastTimestamp:2025-02-13 20:20:35.860892213 +0000 UTC m=+1.258350741,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:64.23.133.95,}" Feb 13 20:20:35.909406 kubelet[1780]: I0213 20:20:35.909302 1780 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:20:35.909406 kubelet[1780]: I0213 20:20:35.909355 1780 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:20:35.909406 kubelet[1780]: I0213 20:20:35.909390 1780 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:20:35.919778 kubelet[1780]: I0213 20:20:35.919717 1780 policy_none.go:49] "None policy: Start" Feb 13 20:20:35.922510 kubelet[1780]: I0213 20:20:35.922470 1780 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:20:35.922775 kubelet[1780]: I0213 20:20:35.922762 1780 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:20:35.949493 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
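The nodeConfig dump above carries the kubelet's hard eviction thresholds: percentage floors for node and image filesystem space and inodes, plus an absolute 100Mi floor for available memory. A Python sketch of how such thresholds evaluate; the helper and the sample numbers are illustrative, and the kubelet's real eviction manager is considerably more involved:

    # Hard eviction thresholds from the kubelet nodeConfig dump above.
    THRESHOLDS = {
        "nodefs.available":   ("pct", 0.10),
        "nodefs.inodesFree":  ("pct", 0.05),
        "imagefs.available":  ("pct", 0.15),
        "imagefs.inodesFree": ("pct", 0.05),
        "memory.available":   ("abs", 100 * 2**20),  # 100Mi
    }

    def should_evict(signal, available, capacity=None):
        kind, limit = THRESHOLDS[signal]
        floor = capacity * limit if kind == "pct" else limit
        return available < floor

    # 10 GiB free on the ~57.7 GiB root fs is above the 10% floor (~5.8 GiB):
    print(should_evict("nodefs.available", 10 * 2**30, capacity=57.7 * 2**30))  # False
    print(should_evict("memory.available", 64 * 2**20))  # True: under 100Mi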
Feb 13 20:20:35.958802 kubelet[1780]: I0213 20:20:35.958722 1780 kubelet_node_status.go:73] "Attempting to register node" node="64.23.133.95" Feb 13 20:20:35.971350 kubelet[1780]: I0213 20:20:35.969379 1780 kubelet_node_status.go:76] "Successfully registered node" node="64.23.133.95" Feb 13 20:20:35.978260 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:20:35.990560 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:20:36.016551 kubelet[1780]: E0213 20:20:36.016320 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.023148 kubelet[1780]: I0213 20:20:36.023089 1780 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:20:36.025616 kubelet[1780]: I0213 20:20:36.024408 1780 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:20:36.029510 kubelet[1780]: I0213 20:20:36.028147 1780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:20:36.029510 kubelet[1780]: I0213 20:20:36.029363 1780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:20:36.032465 kubelet[1780]: E0213 20:20:36.031947 1780 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"64.23.133.95\" not found" Feb 13 20:20:36.032465 kubelet[1780]: I0213 20:20:36.032440 1780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:20:36.032727 kubelet[1780]: I0213 20:20:36.032505 1780 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:20:36.032727 kubelet[1780]: I0213 20:20:36.032558 1780 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:20:36.032727 kubelet[1780]: E0213 20:20:36.032614 1780 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 20:20:36.120915 kubelet[1780]: E0213 20:20:36.120732 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.185130 sudo[1651]: pam_unix(sudo:session): session closed for user root Feb 13 20:20:36.190191 sshd[1648]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:36.196715 systemd[1]: sshd@6-64.23.133.95:22-147.75.109.163:37466.service: Deactivated successfully. Feb 13 20:20:36.199556 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:20:36.201366 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:20:36.203061 systemd-logind[1449]: Removed session 7. 
Feb 13 20:20:36.230947 kubelet[1780]: E0213 20:20:36.230782 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.331879 kubelet[1780]: E0213 20:20:36.331749 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.432119 kubelet[1780]: E0213 20:20:36.432018 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.533238 kubelet[1780]: E0213 20:20:36.533151 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.634540 kubelet[1780]: E0213 20:20:36.634275 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.735549 kubelet[1780]: E0213 20:20:36.735424 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.769026 kubelet[1780]: I0213 20:20:36.768784 1780 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 20:20:36.770156 kubelet[1780]: W0213 20:20:36.770070 1780 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:20:36.823040 kubelet[1780]: E0213 20:20:36.822973 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:36.844491 kubelet[1780]: E0213 20:20:36.843886 1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"64.23.133.95\" not found" Feb 13 20:20:36.945788 kubelet[1780]: I0213 20:20:36.945587 1780 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 20:20:36.946816 containerd[1477]: time="2025-02-13T20:20:36.946090083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:20:36.947620 kubelet[1780]: I0213 20:20:36.946549 1780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 20:20:37.824635 kubelet[1780]: I0213 20:20:37.824500 1780 apiserver.go:52] "Watching apiserver" Feb 13 20:20:37.825395 kubelet[1780]: E0213 20:20:37.824975 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:37.851050 kubelet[1780]: I0213 20:20:37.848015 1780 topology_manager.go:215] "Topology Admit Handler" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" podNamespace="calico-system" podName="calico-node-kx66g" Feb 13 20:20:37.851050 kubelet[1780]: I0213 20:20:37.848224 1780 topology_manager.go:215] "Topology Admit Handler" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" podNamespace="calico-system" podName="csi-node-driver-hvn65" Feb 13 20:20:37.851050 kubelet[1780]: I0213 20:20:37.848353 1780 topology_manager.go:215] "Topology Admit Handler" podUID="294a27f9-50f6-430b-a312-c48e7faed34c" podNamespace="kube-system" podName="kube-proxy-dt555" Feb 13 20:20:37.851050 kubelet[1780]: E0213 20:20:37.849589 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:37.855473 kubelet[1780]: I0213 20:20:37.855404 1780 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:20:37.863098 systemd[1]: Created slice kubepods-besteffort-pod294a27f9_50f6_430b_a312_c48e7faed34c.slice - libcontainer container kubepods-besteffort-pod294a27f9_50f6_430b_a312_c48e7faed34c.slice. 
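The runtime-config update above hands the node's pod CIDR, 192.168.1.0/24, to the CRI. Python's ipaddress module shows what that range provides for pod addressing:

    import ipaddress

    cidr = ipaddress.ip_network("192.168.1.0/24")
    print(cidr.num_addresses)  # 256 addresses in the block
    print(next(cidr.hosts()))  # 192.168.1.1, the first usable pod address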
Feb 13 20:20:37.865954 kubelet[1780]: I0213 20:20:37.865833 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce22ba38-b4f8-4031-88e9-0196a2ef8f62-kubelet-dir\") pod \"csi-node-driver-hvn65\" (UID: \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\") " pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:37.865954 kubelet[1780]: I0213 20:20:37.865913 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce22ba38-b4f8-4031-88e9-0196a2ef8f62-socket-dir\") pod \"csi-node-driver-hvn65\" (UID: \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\") " pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:37.865954 kubelet[1780]: I0213 20:20:37.865941 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/294a27f9-50f6-430b-a312-c48e7faed34c-xtables-lock\") pod \"kube-proxy-dt555\" (UID: \"294a27f9-50f6-430b-a312-c48e7faed34c\") " pod="kube-system/kube-proxy-dt555" Feb 13 20:20:37.865954 kubelet[1780]: I0213 20:20:37.865969 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25577796-c71f-47e3-bc93-42cc57d164d9-tigera-ca-bundle\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866351 kubelet[1780]: I0213 20:20:37.865996 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25577796-c71f-47e3-bc93-42cc57d164d9-node-certs\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866351 kubelet[1780]: I0213 20:20:37.866019 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-bin-dir\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866351 kubelet[1780]: I0213 20:20:37.866040 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce22ba38-b4f8-4031-88e9-0196a2ef8f62-varrun\") pod \"csi-node-driver-hvn65\" (UID: \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\") " pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:37.866351 kubelet[1780]: I0213 20:20:37.866062 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce22ba38-b4f8-4031-88e9-0196a2ef8f62-registration-dir\") pod \"csi-node-driver-hvn65\" (UID: \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\") " pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:37.866351 kubelet[1780]: I0213 20:20:37.866088 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qblr\" (UniqueName: \"kubernetes.io/projected/ce22ba38-b4f8-4031-88e9-0196a2ef8f62-kube-api-access-5qblr\") pod \"csi-node-driver-hvn65\" (UID: \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\") " pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:37.866466 kubelet[1780]: I0213 20:20:37.866109 1780 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-xtables-lock\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866466 kubelet[1780]: I0213 20:20:37.866146 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-policysync\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866466 kubelet[1780]: I0213 20:20:37.866169 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-lib-calico\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866466 kubelet[1780]: I0213 20:20:37.866191 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-net-dir\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866466 kubelet[1780]: I0213 20:20:37.866215 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5m6\" (UniqueName: \"kubernetes.io/projected/25577796-c71f-47e3-bc93-42cc57d164d9-kube-api-access-ns5m6\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866597 kubelet[1780]: I0213 20:20:37.866235 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/294a27f9-50f6-430b-a312-c48e7faed34c-lib-modules\") pod \"kube-proxy-dt555\" (UID: \"294a27f9-50f6-430b-a312-c48e7faed34c\") " pod="kube-system/kube-proxy-dt555" Feb 13 20:20:37.866597 kubelet[1780]: I0213 20:20:37.866281 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-lib-modules\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866597 kubelet[1780]: I0213 20:20:37.866302 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-run-calico\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866597 kubelet[1780]: I0213 20:20:37.866465 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-log-dir\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866597 kubelet[1780]: I0213 20:20:37.866498 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-flexvol-driver-host\") pod \"calico-node-kx66g\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " pod="calico-system/calico-node-kx66g" Feb 13 20:20:37.866772 kubelet[1780]: I0213 20:20:37.866526 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/294a27f9-50f6-430b-a312-c48e7faed34c-kube-proxy\") pod \"kube-proxy-dt555\" (UID: \"294a27f9-50f6-430b-a312-c48e7faed34c\") " pod="kube-system/kube-proxy-dt555" Feb 13 20:20:37.866772 kubelet[1780]: I0213 20:20:37.866550 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r92s\" (UniqueName: \"kubernetes.io/projected/294a27f9-50f6-430b-a312-c48e7faed34c-kube-api-access-6r92s\") pod \"kube-proxy-dt555\" (UID: \"294a27f9-50f6-430b-a312-c48e7faed34c\") " pod="kube-system/kube-proxy-dt555" Feb 13 20:20:37.906606 systemd[1]: Created slice kubepods-besteffort-pod25577796_c71f_47e3_bc93_42cc57d164d9.slice - libcontainer container kubepods-besteffort-pod25577796_c71f_47e3_bc93_42cc57d164d9.slice. Feb 13 20:20:37.979926 kubelet[1780]: E0213 20:20:37.979670 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.979926 kubelet[1780]: W0213 20:20:37.979709 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.979926 kubelet[1780]: E0213 20:20:37.979750 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.980707 kubelet[1780]: E0213 20:20:37.980498 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.980707 kubelet[1780]: W0213 20:20:37.980524 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.980707 kubelet[1780]: E0213 20:20:37.980549 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.981380 kubelet[1780]: E0213 20:20:37.981099 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.981380 kubelet[1780]: W0213 20:20:37.981120 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.981380 kubelet[1780]: E0213 20:20:37.981169 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:37.981773 kubelet[1780]: E0213 20:20:37.981752 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.981928 kubelet[1780]: W0213 20:20:37.981912 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.982024 kubelet[1780]: E0213 20:20:37.982008 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.982433 kubelet[1780]: E0213 20:20:37.982409 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.982830 kubelet[1780]: W0213 20:20:37.982594 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.982830 kubelet[1780]: E0213 20:20:37.982639 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.983350 kubelet[1780]: E0213 20:20:37.983329 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.983444 kubelet[1780]: W0213 20:20:37.983428 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.983538 kubelet[1780]: E0213 20:20:37.983503 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.984123 kubelet[1780]: E0213 20:20:37.983921 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.984123 kubelet[1780]: W0213 20:20:37.983938 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.984123 kubelet[1780]: E0213 20:20:37.983954 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.985039 kubelet[1780]: E0213 20:20:37.984786 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.985039 kubelet[1780]: W0213 20:20:37.984805 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.985039 kubelet[1780]: E0213 20:20:37.984823 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:37.985404 kubelet[1780]: E0213 20:20:37.985384 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.985681 kubelet[1780]: W0213 20:20:37.985509 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.985681 kubelet[1780]: E0213 20:20:37.985536 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.985991 kubelet[1780]: E0213 20:20:37.985975 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.986354 kubelet[1780]: W0213 20:20:37.986214 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.986354 kubelet[1780]: E0213 20:20:37.986247 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.987002 kubelet[1780]: E0213 20:20:37.986627 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.987002 kubelet[1780]: W0213 20:20:37.986644 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.987002 kubelet[1780]: E0213 20:20:37.986660 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.987407 kubelet[1780]: E0213 20:20:37.987385 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.987873 kubelet[1780]: W0213 20:20:37.987489 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.987873 kubelet[1780]: E0213 20:20:37.987510 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.993455 kubelet[1780]: E0213 20:20:37.993376 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.993455 kubelet[1780]: W0213 20:20:37.993438 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.993660 kubelet[1780]: E0213 20:20:37.993476 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:37.995046 kubelet[1780]: E0213 20:20:37.993869 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.995046 kubelet[1780]: W0213 20:20:37.993893 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.995046 kubelet[1780]: E0213 20:20:37.993914 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.995046 kubelet[1780]: E0213 20:20:37.994172 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.995046 kubelet[1780]: W0213 20:20:37.994185 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.995046 kubelet[1780]: E0213 20:20:37.994201 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:37.996068 kubelet[1780]: E0213 20:20:37.996036 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:37.996227 kubelet[1780]: W0213 20:20:37.996209 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:37.996333 kubelet[1780]: E0213 20:20:37.996316 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.001110 kubelet[1780]: E0213 20:20:38.001059 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.001435 kubelet[1780]: W0213 20:20:38.001407 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.001585 kubelet[1780]: E0213 20:20:38.001560 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.014988 kubelet[1780]: E0213 20:20:38.014087 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.014988 kubelet[1780]: W0213 20:20:38.014128 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.014988 kubelet[1780]: E0213 20:20:38.014167 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:38.020939 kubelet[1780]: E0213 20:20:38.020095 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.020939 kubelet[1780]: W0213 20:20:38.020240 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.020939 kubelet[1780]: E0213 20:20:38.020281 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.023661 kubelet[1780]: E0213 20:20:38.023298 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.023661 kubelet[1780]: W0213 20:20:38.023357 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.023661 kubelet[1780]: E0213 20:20:38.023392 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.024034 kubelet[1780]: E0213 20:20:38.023934 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.024034 kubelet[1780]: W0213 20:20:38.023955 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.024034 kubelet[1780]: E0213 20:20:38.023997 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.025236 kubelet[1780]: E0213 20:20:38.025196 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.025236 kubelet[1780]: W0213 20:20:38.025222 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.025236 kubelet[1780]: E0213 20:20:38.025255 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.027481 kubelet[1780]: E0213 20:20:38.027170 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.027481 kubelet[1780]: W0213 20:20:38.027210 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.028369 kubelet[1780]: E0213 20:20:38.027934 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:38.028369 kubelet[1780]: E0213 20:20:38.028125 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:38.028369 kubelet[1780]: W0213 20:20:38.028141 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:38.028369 kubelet[1780]: E0213 20:20:38.028163 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:38.203463 kubelet[1780]: E0213 20:20:38.202323 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:38.206162 containerd[1477]: time="2025-02-13T20:20:38.205033683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dt555,Uid:294a27f9-50f6-430b-a312-c48e7faed34c,Namespace:kube-system,Attempt:0,}" Feb 13 20:20:38.212518 kubelet[1780]: E0213 20:20:38.211352 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:38.212838 containerd[1477]: time="2025-02-13T20:20:38.212764404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kx66g,Uid:25577796-c71f-47e3-bc93-42cc57d164d9,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:38.220365 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Feb 13 20:20:38.826110 kubelet[1780]: E0213 20:20:38.825975 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:38.906023 containerd[1477]: time="2025-02-13T20:20:38.904490096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:20:38.911604 containerd[1477]: time="2025-02-13T20:20:38.911516654Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:20:38.913281 containerd[1477]: time="2025-02-13T20:20:38.913193288Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:20:38.914443 containerd[1477]: time="2025-02-13T20:20:38.914383082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:20:38.915902 containerd[1477]: time="2025-02-13T20:20:38.915799559Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:20:38.920036 containerd[1477]: time="2025-02-13T20:20:38.919961663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:20:38.923914 containerd[1477]: time="2025-02-13T20:20:38.922060201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 709.112151ms" Feb 13 20:20:38.926022 containerd[1477]: time="2025-02-13T20:20:38.925956205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 720.75912ms" Feb 13 20:20:39.008245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480729555.mount: Deactivated successfully. Feb 13 20:20:39.033197 kubelet[1780]: E0213 20:20:39.033117 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:39.176813 containerd[1477]: time="2025-02-13T20:20:39.176454666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:39.176813 containerd[1477]: time="2025-02-13T20:20:39.176565515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:39.177766 containerd[1477]: time="2025-02-13T20:20:39.177634441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:39.180892 containerd[1477]: time="2025-02-13T20:20:39.180734769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:39.182518 containerd[1477]: time="2025-02-13T20:20:39.182367632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:39.182661 containerd[1477]: time="2025-02-13T20:20:39.182545888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:39.182661 containerd[1477]: time="2025-02-13T20:20:39.182597751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:39.188974 containerd[1477]: time="2025-02-13T20:20:39.183385020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:39.347802 systemd[1]: run-containerd-runc-k8s.io-2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63-runc.aXWWqj.mount: Deactivated successfully. Feb 13 20:20:39.364328 systemd[1]: Started cri-containerd-2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63.scope - libcontainer container 2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63. Feb 13 20:20:39.380543 systemd[1]: Started cri-containerd-6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1.scope - libcontainer container 6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1. 
Feb 13 20:20:39.440528 containerd[1477]: time="2025-02-13T20:20:39.440106606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dt555,Uid:294a27f9-50f6-430b-a312-c48e7faed34c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63\"" Feb 13 20:20:39.445527 kubelet[1780]: E0213 20:20:39.444259 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:39.448884 containerd[1477]: time="2025-02-13T20:20:39.448709058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:20:39.458892 containerd[1477]: time="2025-02-13T20:20:39.458339222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kx66g,Uid:25577796-c71f-47e3-bc93-42cc57d164d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\"" Feb 13 20:20:39.460483 kubelet[1780]: E0213 20:20:39.460070 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:39.826446 kubelet[1780]: E0213 20:20:39.826322 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:40.827749 kubelet[1780]: E0213 20:20:40.827663 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:41.047640 kubelet[1780]: E0213 20:20:41.043002 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:41.078077 kernel: hrtimer: interrupt took 4329748 ns Feb 13 20:20:41.317105 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 20:20:41.582381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332625034.mount: Deactivated successfully. 
Feb 13 20:20:41.828307 kubelet[1780]: E0213 20:20:41.828075 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:42.313376 containerd[1477]: time="2025-02-13T20:20:42.313244493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:42.321743 containerd[1477]: time="2025-02-13T20:20:42.321310212Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:20:42.323771 containerd[1477]: time="2025-02-13T20:20:42.323701822Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:42.329894 containerd[1477]: time="2025-02-13T20:20:42.328223906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:42.330207 containerd[1477]: time="2025-02-13T20:20:42.330159148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.881076379s" Feb 13 20:20:42.330327 containerd[1477]: time="2025-02-13T20:20:42.330307383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:20:42.333240 containerd[1477]: time="2025-02-13T20:20:42.333191629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:20:42.335906 containerd[1477]: time="2025-02-13T20:20:42.335821458Z" level=info msg="CreateContainer within sandbox \"2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:20:42.364539 containerd[1477]: time="2025-02-13T20:20:42.364440719Z" level=info msg="CreateContainer within sandbox \"2535cdcd0a5ee92d5153d607f274b5c0ddf6c9a34af0162a811f78edf18c8e63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19\"" Feb 13 20:20:42.366231 containerd[1477]: time="2025-02-13T20:20:42.366193242Z" level=info msg="StartContainer for \"5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19\"" Feb 13 20:20:42.422593 systemd[1]: run-containerd-runc-k8s.io-5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19-runc.kuMvGz.mount: Deactivated successfully. Feb 13 20:20:42.434216 systemd[1]: Started cri-containerd-5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19.scope - libcontainer container 5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19. 
Feb 13 20:20:42.492883 containerd[1477]: time="2025-02-13T20:20:42.492505119Z" level=info msg="StartContainer for \"5af6651d6181226a4f6634d7df451ac874b906dfd4d62188174670e0bd646d19\" returns successfully" Feb 13 20:20:42.828657 kubelet[1780]: E0213 20:20:42.828572 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:43.033073 kubelet[1780]: E0213 20:20:43.032970 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:43.141101 kubelet[1780]: E0213 20:20:43.140888 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:43.158987 kubelet[1780]: I0213 20:20:43.158712 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dt555" podStartSLOduration=4.272926758 podStartE2EDuration="7.158687502s" podCreationTimestamp="2025-02-13 20:20:36 +0000 UTC" firstStartedPulling="2025-02-13 20:20:39.446364478 +0000 UTC m=+4.843822980" lastFinishedPulling="2025-02-13 20:20:42.332125196 +0000 UTC m=+7.729583724" observedRunningTime="2025-02-13 20:20:43.158355617 +0000 UTC m=+8.555814145" watchObservedRunningTime="2025-02-13 20:20:43.158687502 +0000 UTC m=+8.556146034" Feb 13 20:20:43.199839 kubelet[1780]: E0213 20:20:43.199782 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.199839 kubelet[1780]: W0213 20:20:43.199826 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.200136 kubelet[1780]: E0213 20:20:43.199898 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.200610 kubelet[1780]: E0213 20:20:43.200335 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.200610 kubelet[1780]: W0213 20:20:43.200363 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.200610 kubelet[1780]: E0213 20:20:43.200383 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.200880 kubelet[1780]: E0213 20:20:43.200811 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.200880 kubelet[1780]: W0213 20:20:43.200875 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.200980 kubelet[1780]: E0213 20:20:43.200895 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.201260 kubelet[1780]: E0213 20:20:43.201237 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.201323 kubelet[1780]: W0213 20:20:43.201273 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.201323 kubelet[1780]: E0213 20:20:43.201289 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.201658 kubelet[1780]: E0213 20:20:43.201622 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.201658 kubelet[1780]: W0213 20:20:43.201656 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.202027 kubelet[1780]: E0213 20:20:43.201673 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.202070 kubelet[1780]: E0213 20:20:43.202044 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.202070 kubelet[1780]: W0213 20:20:43.202058 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.202139 kubelet[1780]: E0213 20:20:43.202093 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.202415 kubelet[1780]: E0213 20:20:43.202396 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.202415 kubelet[1780]: W0213 20:20:43.202415 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.202526 kubelet[1780]: E0213 20:20:43.202429 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.202731 kubelet[1780]: E0213 20:20:43.202713 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.202731 kubelet[1780]: W0213 20:20:43.202729 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.202821 kubelet[1780]: E0213 20:20:43.202751 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.203563 kubelet[1780]: E0213 20:20:43.203119 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.203563 kubelet[1780]: W0213 20:20:43.203150 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.203563 kubelet[1780]: E0213 20:20:43.203165 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.203563 kubelet[1780]: E0213 20:20:43.203374 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.203563 kubelet[1780]: W0213 20:20:43.203384 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.203563 kubelet[1780]: E0213 20:20:43.203397 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.203901 kubelet[1780]: E0213 20:20:43.203585 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.203901 kubelet[1780]: W0213 20:20:43.203594 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.203901 kubelet[1780]: E0213 20:20:43.203605 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.203901 kubelet[1780]: E0213 20:20:43.203796 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.203901 kubelet[1780]: W0213 20:20:43.203806 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.203901 kubelet[1780]: E0213 20:20:43.203817 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.204161 kubelet[1780]: E0213 20:20:43.204129 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.204161 kubelet[1780]: W0213 20:20:43.204142 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.204161 kubelet[1780]: E0213 20:20:43.204156 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.204451 kubelet[1780]: E0213 20:20:43.204432 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.204451 kubelet[1780]: W0213 20:20:43.204449 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.204546 kubelet[1780]: E0213 20:20:43.204463 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.204687 kubelet[1780]: E0213 20:20:43.204671 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.204687 kubelet[1780]: W0213 20:20:43.204686 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.204787 kubelet[1780]: E0213 20:20:43.204698 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.204973 kubelet[1780]: E0213 20:20:43.204955 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.204973 kubelet[1780]: W0213 20:20:43.204970 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.205082 kubelet[1780]: E0213 20:20:43.204983 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.205186 kubelet[1780]: E0213 20:20:43.205170 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.205186 kubelet[1780]: W0213 20:20:43.205181 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.205314 kubelet[1780]: E0213 20:20:43.205190 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.205408 kubelet[1780]: E0213 20:20:43.205394 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.205408 kubelet[1780]: W0213 20:20:43.205406 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.205488 kubelet[1780]: E0213 20:20:43.205415 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.205585 kubelet[1780]: E0213 20:20:43.205571 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.205585 kubelet[1780]: W0213 20:20:43.205584 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.205672 kubelet[1780]: E0213 20:20:43.205595 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.205782 kubelet[1780]: E0213 20:20:43.205767 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.205782 kubelet[1780]: W0213 20:20:43.205781 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.205880 kubelet[1780]: E0213 20:20:43.205792 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.244643 kubelet[1780]: E0213 20:20:43.244385 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.244643 kubelet[1780]: W0213 20:20:43.244415 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.244643 kubelet[1780]: E0213 20:20:43.244439 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.245092 kubelet[1780]: E0213 20:20:43.244966 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.245092 kubelet[1780]: W0213 20:20:43.244993 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.245092 kubelet[1780]: E0213 20:20:43.245040 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.246372 kubelet[1780]: E0213 20:20:43.246054 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.246372 kubelet[1780]: W0213 20:20:43.246081 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.246372 kubelet[1780]: E0213 20:20:43.246118 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246394 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248112 kubelet[1780]: W0213 20:20:43.246407 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246430 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246616 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248112 kubelet[1780]: W0213 20:20:43.246623 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246632 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246896 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248112 kubelet[1780]: W0213 20:20:43.246916 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.246981 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248112 kubelet[1780]: E0213 20:20:43.247430 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248495 kubelet[1780]: W0213 20:20:43.247445 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.247488 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.247656 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248495 kubelet[1780]: W0213 20:20:43.247667 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.247684 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.247970 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248495 kubelet[1780]: W0213 20:20:43.247984 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.248011 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.248495 kubelet[1780]: E0213 20:20:43.248297 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.248495 kubelet[1780]: W0213 20:20:43.248310 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.249424 kubelet[1780]: E0213 20:20:43.248322 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.249424 kubelet[1780]: E0213 20:20:43.248974 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.249424 kubelet[1780]: W0213 20:20:43.248989 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.249424 kubelet[1780]: E0213 20:20:43.249001 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:43.249801 kubelet[1780]: E0213 20:20:43.249698 1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:43.249801 kubelet[1780]: W0213 20:20:43.249720 1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:43.249801 kubelet[1780]: E0213 20:20:43.249739 1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:20:43.829477 kubelet[1780]: E0213 20:20:43.829080 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:44.058901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222317023.mount: Deactivated successfully. Feb 13 20:20:44.143803 kubelet[1780]: E0213 20:20:44.143296 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
[the FlexVolume nodeagent~uds probe triplet repeats 20 times between Feb 13 20:20:44.215159 and Feb 13 20:20:44.223745]
Feb 13 20:20:44.233906 containerd[1477]: time="2025-02-13T20:20:44.233772387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:44.236324 containerd[1477]: time="2025-02-13T20:20:44.235917700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:20:44.238886 containerd[1477]: time="2025-02-13T20:20:44.237661173Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:44.243625 containerd[1477]: time="2025-02-13T20:20:44.243549484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:44.245437 containerd[1477]: time="2025-02-13T20:20:44.245372492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.911938983s" Feb 13 20:20:44.245630 containerd[1477]: time="2025-02-13T20:20:44.245610303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
[the FlexVolume nodeagent~uds probe triplet repeats 12 times between Feb 13 20:20:44.252476 and Feb 13 20:20:44.256737]
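As a quick sanity check on the pod2daemon-flexvol pull above, containerd logged size "6855165" fetched in 1.911938983s; the sketch below just re-derives the effective rate from those two logged numbers.

```go
// Effective pull rate for the pod2daemon-flexvol image, computed only
// from the size and duration containerd logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 6855165.0
	d, _ := time.ParseDuration("1.911938983s")
	fmt.Printf("%.2f MiB/s\n", sizeBytes/d.Seconds()/(1<<20)) // ≈ 3.42 MiB/s
}
```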
Feb 13 20:20:44.262945 containerd[1477]: time="2025-02-13T20:20:44.261598778Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:20:44.305448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284362439.mount: Deactivated successfully. Feb 13 20:20:44.331623 containerd[1477]: time="2025-02-13T20:20:44.331048032Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\"" Feb 13 20:20:44.333037 containerd[1477]: time="2025-02-13T20:20:44.332982124Z" level=info msg="StartContainer for \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\"" Feb 13 20:20:44.394075 systemd-resolved[1332]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Feb 13 20:20:44.417210 systemd[1]: Started cri-containerd-4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc.scope - libcontainer container 4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc. Feb 13 20:20:44.493913 containerd[1477]: time="2025-02-13T20:20:44.493727091Z" level=info msg="StartContainer for \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\" returns successfully" Feb 13 20:20:44.527511 systemd[1]: cri-containerd-4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc.scope: Deactivated successfully.
Feb 13 20:20:44.705281 containerd[1477]: time="2025-02-13T20:20:44.703990398Z" level=info msg="shim disconnected" id=4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc namespace=k8s.io Feb 13 20:20:44.705281 containerd[1477]: time="2025-02-13T20:20:44.704060344Z" level=warning msg="cleaning up after shim disconnected" id=4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc namespace=k8s.io Feb 13 20:20:44.705281 containerd[1477]: time="2025-02-13T20:20:44.704072889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:44.830578 kubelet[1780]: E0213 20:20:44.830392 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:44.989504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc-rootfs.mount: Deactivated successfully. Feb 13 20:20:45.033512 kubelet[1780]: E0213 20:20:45.032972 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:45.152646 kubelet[1780]: E0213 20:20:45.152578 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:45.156395 containerd[1477]: time="2025-02-13T20:20:45.156328612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:20:45.831708 kubelet[1780]: E0213 20:20:45.831381 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:46.831882 kubelet[1780]: E0213 20:20:46.831771 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:47.034399 kubelet[1780]: E0213 20:20:47.033824 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:47.832631 kubelet[1780]: E0213 20:20:47.832537 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:48.833174 kubelet[1780]: E0213 20:20:48.832922 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:49.040579 kubelet[1780]: E0213 20:20:49.040080 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:49.834595 kubelet[1780]: E0213 20:20:49.834502 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:50.836540 kubelet[1780]: E0213 20:20:50.836446 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
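The recurring dns.go:153 warning above reflects the libc resolver's three-nameserver ceiling (glibc's MAXNS is 3): the applied line 67.207.67.3 67.207.67.2 67.207.67.3 even carries a duplicate, so at least one entry is dropped. A sketch of that cap follows; the dedup step is an assumption of this illustration, not documented kubelet behavior.

```go
// Sketch of the check behind "Nameserver limits exceeded": keep at most
// `limit` nameservers, dropping duplicates first (illustrative choice).
package main

import "fmt"

func capNameservers(ns []string, limit int) []string {
	seen := map[string]bool{}
	var kept []string
	for _, n := range ns {
		if seen[n] {
			continue // duplicate entries never count against the limit here
		}
		seen[n] = true
		if len(kept) < limit {
			kept = append(kept, n)
		}
	}
	return kept
}

func main() {
	applied := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3"}
	fmt.Println(capNameservers(applied, 3)) // [67.207.67.3 67.207.67.2]
}
```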
Feb 13 20:20:51.035350 kubelet[1780]: E0213 20:20:51.034439 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:51.837315 kubelet[1780]: E0213 20:20:51.837135 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:52.503897 containerd[1477]: time="2025-02-13T20:20:52.502169492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:52.504806 containerd[1477]: time="2025-02-13T20:20:52.504734740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:20:52.506887 containerd[1477]: time="2025-02-13T20:20:52.506806141Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:52.520182 containerd[1477]: time="2025-02-13T20:20:52.517262517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:52.520182 containerd[1477]: time="2025-02-13T20:20:52.518772127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.362378261s" Feb 13 20:20:52.520182 containerd[1477]: time="2025-02-13T20:20:52.518832351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:20:52.524183 containerd[1477]: time="2025-02-13T20:20:52.524107871Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:20:52.639779 containerd[1477]: time="2025-02-13T20:20:52.639648364Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\"" Feb 13 20:20:52.645904 containerd[1477]: time="2025-02-13T20:20:52.643687453Z" level=info msg="StartContainer for \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\"" Feb 13 20:20:52.716308 systemd[1]: Started cri-containerd-3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d.scope - libcontainer container 3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d. 
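The once-per-second file_linux.go:61 message threading through the log is benign: the kubelet's static-pod config source polls its configured path and ignores it while absent. Below is a sketch of that guard; creating the directory is an illustrative remedy for a node that should serve static pods, not something the log shows happening.

```go
// Sketch of the kubelet's static-pod path check; path taken from the log.
package main

import (
	"fmt"
	"os"
)

func main() {
	const staticPodPath = "/etc/kubernetes/manifests"
	if _, err := os.Stat(staticPodPath); os.IsNotExist(err) {
		fmt.Printf("path does not exist, ignoring: %s\n", staticPodPath)
		// Assumption for illustration: pre-create the directory so the
		// watcher has a target and the per-second message stops.
		_ = os.MkdirAll(staticPodPath, 0o755)
	}
}
```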
Feb 13 20:20:52.818112 containerd[1477]: time="2025-02-13T20:20:52.817923759Z" level=info msg="StartContainer for \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\" returns successfully" Feb 13 20:20:52.838161 kubelet[1780]: E0213 20:20:52.838037 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:53.033829 kubelet[1780]: E0213 20:20:53.033231 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:53.191005 kubelet[1780]: E0213 20:20:53.190808 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:53.839021 kubelet[1780]: E0213 20:20:53.838964 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:54.085664 containerd[1477]: time="2025-02-13T20:20:54.085082515Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:20:54.088817 systemd[1]: cri-containerd-3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d.scope: Deactivated successfully. Feb 13 20:20:54.130162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d-rootfs.mount: Deactivated successfully. Feb 13 20:20:54.149408 kubelet[1780]: I0213 20:20:54.149366 1780 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:20:54.210284 kubelet[1780]: E0213 20:20:54.208365 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:54.387033 containerd[1477]: time="2025-02-13T20:20:54.386389426Z" level=info msg="shim disconnected" id=3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d namespace=k8s.io Feb 13 20:20:54.387033 containerd[1477]: time="2025-02-13T20:20:54.386490837Z" level=warning msg="cleaning up after shim disconnected" id=3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d namespace=k8s.io Feb 13 20:20:54.387033 containerd[1477]: time="2025-02-13T20:20:54.386507204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:54.840099 kubelet[1780]: E0213 20:20:54.840012 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:55.055638 systemd[1]: Created slice kubepods-besteffort-podce22ba38_b4f8_4031_88e9_0196a2ef8f62.slice - libcontainer container kubepods-besteffort-podce22ba38_b4f8_4031_88e9_0196a2ef8f62.slice. 
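The reload failure above fires because the WRITE event was for calico-kubeconfig while /etc/cni/net.d still held no network config, so containerd keeps the CNI plugin uninitialized until a config file appears. A sketch of the same presence test follows; the exact extension set containerd's loader accepts is an assumption here.

```go
// Sketch of containerd's "is there any CNI network config?" test that
// produces "no network config found in /etc/cni/net.d" above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	fmt.Println(ok, err) // false until Calico's install-cni writes its conflist
}
```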
Feb 13 20:20:55.072858 containerd[1477]: time="2025-02-13T20:20:55.072558114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvn65,Uid:ce22ba38-b4f8-4031-88e9-0196a2ef8f62,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:55.222144 kubelet[1780]: E0213 20:20:55.220872 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:55.228928 containerd[1477]: time="2025-02-13T20:20:55.228393405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:20:55.248796 containerd[1477]: time="2025-02-13T20:20:55.240309779Z" level=error msg="Failed to destroy network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:55.248796 containerd[1477]: time="2025-02-13T20:20:55.241216995Z" level=error msg="encountered an error cleaning up failed sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:55.248796 containerd[1477]: time="2025-02-13T20:20:55.243982271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvn65,Uid:ce22ba38-b4f8-4031-88e9-0196a2ef8f62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:55.249030 kubelet[1780]: E0213 20:20:55.248238 1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:55.249030 kubelet[1780]: E0213 20:20:55.248332 1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:55.249030 kubelet[1780]: E0213 20:20:55.248369 1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvn65" Feb 13 20:20:55.244296 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2-shm.mount: Deactivated successfully. Feb 13 20:20:55.249660 kubelet[1780]: E0213 20:20:55.248436 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hvn65_calico-system(ce22ba38-b4f8-4031-88e9-0196a2ef8f62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hvn65_calico-system(ce22ba38-b4f8-4031-88e9-0196a2ef8f62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:55.821714 kubelet[1780]: E0213 20:20:55.821633 1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:55.840809 kubelet[1780]: E0213 20:20:55.840709 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:56.227332 kubelet[1780]: I0213 20:20:56.226565 1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:20:56.228538 containerd[1477]: time="2025-02-13T20:20:56.228436326Z" level=info msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" Feb 13 20:20:56.228902 containerd[1477]: time="2025-02-13T20:20:56.228782142Z" level=info msg="Ensure that sandbox af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2 in task-service has been cleanup successfully" Feb 13 20:20:56.287319 containerd[1477]: time="2025-02-13T20:20:56.287100428Z" level=error msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" failed" error="failed to destroy network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:56.288539 kubelet[1780]: E0213 20:20:56.287622 1780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:20:56.288539 kubelet[1780]: E0213 20:20:56.287713 1780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2"} Feb 13 20:20:56.288539 kubelet[1780]: E0213 20:20:56.287802 1780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:56.288539 kubelet[1780]: E0213 20:20:56.287839 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce22ba38-b4f8-4031-88e9-0196a2ef8f62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvn65" podUID="ce22ba38-b4f8-4031-88e9-0196a2ef8f62" Feb 13 20:20:56.477911 kubelet[1780]: I0213 20:20:56.477290 1780 topology_manager.go:215] "Topology Admit Handler" podUID="ca35f01e-2a12-410b-956b-c6dadd6e67ef" podNamespace="calico-system" podName="calico-typha-75c5fb64cf-mj8wz" Feb 13 20:20:56.490371 systemd[1]: Created slice kubepods-besteffort-podca35f01e_2a12_410b_956b_c6dadd6e67ef.slice - libcontainer container kubepods-besteffort-podca35f01e_2a12_410b_956b_c6dadd6e67ef.slice. Feb 13 20:20:56.523361 kubelet[1780]: I0213 20:20:56.523011 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca35f01e-2a12-410b-956b-c6dadd6e67ef-tigera-ca-bundle\") pod \"calico-typha-75c5fb64cf-mj8wz\" (UID: \"ca35f01e-2a12-410b-956b-c6dadd6e67ef\") " pod="calico-system/calico-typha-75c5fb64cf-mj8wz" Feb 13 20:20:56.523361 kubelet[1780]: I0213 20:20:56.523066 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhlt5\" (UniqueName: \"kubernetes.io/projected/ca35f01e-2a12-410b-956b-c6dadd6e67ef-kube-api-access-dhlt5\") pod \"calico-typha-75c5fb64cf-mj8wz\" (UID: \"ca35f01e-2a12-410b-956b-c6dadd6e67ef\") " pod="calico-system/calico-typha-75c5fb64cf-mj8wz" Feb 13 20:20:56.523361 kubelet[1780]: I0213 20:20:56.523113 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca35f01e-2a12-410b-956b-c6dadd6e67ef-typha-certs\") pod \"calico-typha-75c5fb64cf-mj8wz\" (UID: \"ca35f01e-2a12-410b-956b-c6dadd6e67ef\") " pod="calico-system/calico-typha-75c5fb64cf-mj8wz" Feb 13 20:20:56.797253 kubelet[1780]: E0213 20:20:56.796430 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:56.797988 containerd[1477]: time="2025-02-13T20:20:56.797925472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c5fb64cf-mj8wz,Uid:ca35f01e-2a12-410b-956b-c6dadd6e67ef,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:56.841830 kubelet[1780]: E0213 20:20:56.841779 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:56.856688 containerd[1477]: time="2025-02-13T20:20:56.854839413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:56.856688 containerd[1477]: time="2025-02-13T20:20:56.856067508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:56.856688 containerd[1477]: time="2025-02-13T20:20:56.856091919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:56.856688 containerd[1477]: time="2025-02-13T20:20:56.856232791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:56.892529 systemd[1]: Started cri-containerd-5bd2e24ca230494abe66e60de1b05555783559c718f4c6c42f7bf459e481b681.scope - libcontainer container 5bd2e24ca230494abe66e60de1b05555783559c718f4c6c42f7bf459e481b681. Feb 13 20:20:56.985934 containerd[1477]: time="2025-02-13T20:20:56.985885508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c5fb64cf-mj8wz,Uid:ca35f01e-2a12-410b-956b-c6dadd6e67ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bd2e24ca230494abe66e60de1b05555783559c718f4c6c42f7bf459e481b681\"" Feb 13 20:20:56.988087 kubelet[1780]: E0213 20:20:56.987447 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:20:57.843546 kubelet[1780]: E0213 20:20:57.843473 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:58.102108 kubelet[1780]: I0213 20:20:58.021672 1780 topology_manager.go:215] "Topology Admit Handler" podUID="8386e5da-6e1b-4bc1-b820-a5872769500e" podNamespace="calico-system" podName="calico-kube-controllers-5d558c6c6c-njt6l" Feb 13 20:20:58.102108 kubelet[1780]: I0213 20:20:58.043483 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8386e5da-6e1b-4bc1-b820-a5872769500e-tigera-ca-bundle\") pod \"calico-kube-controllers-5d558c6c6c-njt6l\" (UID: \"8386e5da-6e1b-4bc1-b820-a5872769500e\") " pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" Feb 13 20:20:58.102108 kubelet[1780]: I0213 20:20:58.043585 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqkz\" (UniqueName: \"kubernetes.io/projected/8386e5da-6e1b-4bc1-b820-a5872769500e-kube-api-access-tnqkz\") pod \"calico-kube-controllers-5d558c6c6c-njt6l\" (UID: \"8386e5da-6e1b-4bc1-b820-a5872769500e\") " pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" Feb 13 20:20:58.031095 systemd[1]: Created slice kubepods-besteffort-pod8386e5da_6e1b_4bc1_b820_a5872769500e.slice - libcontainer container kubepods-besteffort-pod8386e5da_6e1b_4bc1_b820_a5872769500e.slice. 
Feb 13 20:20:58.340919 containerd[1477]: time="2025-02-13T20:20:58.340410877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d558c6c6c-njt6l,Uid:8386e5da-6e1b-4bc1-b820-a5872769500e,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:58.524143 containerd[1477]: time="2025-02-13T20:20:58.522659927Z" level=error msg="Failed to destroy network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:58.528988 containerd[1477]: time="2025-02-13T20:20:58.527447358Z" level=error msg="encountered an error cleaning up failed sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:58.528988 containerd[1477]: time="2025-02-13T20:20:58.527563096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d558c6c6c-njt6l,Uid:8386e5da-6e1b-4bc1-b820-a5872769500e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:58.528294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286-shm.mount: Deactivated successfully. 
Feb 13 20:20:58.529724 kubelet[1780]: E0213 20:20:58.527927 1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:58.529724 kubelet[1780]: E0213 20:20:58.528011 1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" Feb 13 20:20:58.529724 kubelet[1780]: E0213 20:20:58.528043 1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" Feb 13 20:20:58.530508 kubelet[1780]: E0213 20:20:58.528153 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d558c6c6c-njt6l_calico-system(8386e5da-6e1b-4bc1-b820-a5872769500e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d558c6c6c-njt6l_calico-system(8386e5da-6e1b-4bc1-b820-a5872769500e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" podUID="8386e5da-6e1b-4bc1-b820-a5872769500e" Feb 13 20:20:58.844213 kubelet[1780]: E0213 20:20:58.844148 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:20:59.167366 kubelet[1780]: I0213 20:20:59.166650 1780 topology_manager.go:215] "Topology Admit Handler" podUID="d11c2fc0-063e-4017-ba54-3c29f7590e21" podNamespace="default" podName="nginx-deployment-85f456d6dd-8289b" Feb 13 20:20:59.182592 systemd[1]: Created slice kubepods-besteffort-podd11c2fc0_063e_4017_ba54_3c29f7590e21.slice - libcontainer container kubepods-besteffort-podd11c2fc0_063e_4017_ba54_3c29f7590e21.slice. 
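Every sandbox create and destroy from 20:20:55 onward fails on the same precondition: Calico's CNI plugin stats /var/lib/calico/nodename, which only exists once the calico/node container is running and has written it. A sketch of that gate, with the hint text modeled on the log's error message:

```go
// Sketch of Calico's nodename precondition behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" failures.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	if _, err := os.Stat(path); err != nil {
		// os.Stat yields "stat /var/lib/calico/nodename: no such file or
		// directory", matching the log; the hint mirrors the plugin's text.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	fmt.Println(name, err)
}
```

This is why the csi-node-driver, calico-kube-controllers, and nginx sandboxes all cycle through the same CreatePodSandbox/KillPodSandbox errors until calico-node becomes ready.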
Feb 13 20:20:59.250397 kubelet[1780]: I0213 20:20:59.249187 1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:20:59.250661 containerd[1477]: time="2025-02-13T20:20:59.250543896Z" level=info msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" Feb 13 20:20:59.251098 containerd[1477]: time="2025-02-13T20:20:59.251049934Z" level=info msg="Ensure that sandbox dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286 in task-service has been cleanup successfully" Feb 13 20:20:59.352635 kubelet[1780]: I0213 20:20:59.352542 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wv2r\" (UniqueName: \"kubernetes.io/projected/d11c2fc0-063e-4017-ba54-3c29f7590e21-kube-api-access-2wv2r\") pod \"nginx-deployment-85f456d6dd-8289b\" (UID: \"d11c2fc0-063e-4017-ba54-3c29f7590e21\") " pod="default/nginx-deployment-85f456d6dd-8289b" Feb 13 20:20:59.364796 containerd[1477]: time="2025-02-13T20:20:59.364089867Z" level=error msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" failed" error="failed to destroy network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:59.365374 kubelet[1780]: E0213 20:20:59.364393 1780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:20:59.365374 kubelet[1780]: E0213 20:20:59.364446 1780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286"} Feb 13 20:20:59.365374 kubelet[1780]: E0213 20:20:59.364513 1780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8386e5da-6e1b-4bc1-b820-a5872769500e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:59.365374 kubelet[1780]: E0213 20:20:59.364540 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8386e5da-6e1b-4bc1-b820-a5872769500e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" podUID="8386e5da-6e1b-4bc1-b820-a5872769500e" Feb 13 20:20:59.494880 
containerd[1477]: time="2025-02-13T20:20:59.493509016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-8289b,Uid:d11c2fc0-063e-4017-ba54-3c29f7590e21,Namespace:default,Attempt:0,}" Feb 13 20:20:59.656231 containerd[1477]: time="2025-02-13T20:20:59.656149620Z" level=error msg="Failed to destroy network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:59.658088 containerd[1477]: time="2025-02-13T20:20:59.656636615Z" level=error msg="encountered an error cleaning up failed sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:59.658088 containerd[1477]: time="2025-02-13T20:20:59.656718161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-8289b,Uid:d11c2fc0-063e-4017-ba54-3c29f7590e21,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:59.658290 kubelet[1780]: E0213 20:20:59.657125 1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:59.658290 kubelet[1780]: E0213 20:20:59.657203 1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-8289b" Feb 13 20:20:59.658290 kubelet[1780]: E0213 20:20:59.657230 1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-8289b" Feb 13 20:20:59.658462 kubelet[1780]: E0213 20:20:59.657275 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-8289b_default(d11c2fc0-063e-4017-ba54-3c29f7590e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-8289b_default(d11c2fc0-063e-4017-ba54-3c29f7590e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-8289b" podUID="d11c2fc0-063e-4017-ba54-3c29f7590e21" Feb 13 20:20:59.660727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3-shm.mount: Deactivated successfully. Feb 13 20:20:59.845073 kubelet[1780]: E0213 20:20:59.845008 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:00.254710 kubelet[1780]: I0213 20:21:00.254060 1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:00.255902 containerd[1477]: time="2025-02-13T20:21:00.255775540Z" level=info msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" Feb 13 20:21:00.256783 containerd[1477]: time="2025-02-13T20:21:00.256172279Z" level=info msg="Ensure that sandbox 79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3 in task-service has been cleanup successfully" Feb 13 20:21:00.365423 containerd[1477]: time="2025-02-13T20:21:00.365348220Z" level=error msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" failed" error="failed to destroy network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:21:00.366841 kubelet[1780]: E0213 20:21:00.366770 1780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:00.366841 kubelet[1780]: E0213 20:21:00.366898 1780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3"} Feb 13 20:21:00.366841 kubelet[1780]: E0213 20:21:00.366978 1780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d11c2fc0-063e-4017-ba54-3c29f7590e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:21:00.367780 kubelet[1780]: E0213 20:21:00.367020 1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d11c2fc0-063e-4017-ba54-3c29f7590e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-8289b" podUID="d11c2fc0-063e-4017-ba54-3c29f7590e21" Feb 13 20:21:00.846267 kubelet[1780]: E0213 20:21:00.846152 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:01.846517 kubelet[1780]: E0213 20:21:01.846401 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:02.847349 kubelet[1780]: E0213 20:21:02.847228 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:03.847919 kubelet[1780]: E0213 20:21:03.847873 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:04.849409 kubelet[1780]: E0213 20:21:04.849300 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:05.114703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815078837.mount: Deactivated successfully. Feb 13 20:21:05.204093 containerd[1477]: time="2025-02-13T20:21:05.202943918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:05.205765 containerd[1477]: time="2025-02-13T20:21:05.205670695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:21:05.208512 containerd[1477]: time="2025-02-13T20:21:05.206950208Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:05.212730 containerd[1477]: time="2025-02-13T20:21:05.211392101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:05.212730 containerd[1477]: time="2025-02-13T20:21:05.212554913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.984081115s" Feb 13 20:21:05.212730 containerd[1477]: time="2025-02-13T20:21:05.212597357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:21:05.215686 containerd[1477]: time="2025-02-13T20:21:05.215641622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:21:05.254346 containerd[1477]: time="2025-02-13T20:21:05.254275352Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:21:05.313369 containerd[1477]: time="2025-02-13T20:21:05.313290016Z" level=info msg="CreateContainer within sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\"" Feb 13 20:21:05.316335 containerd[1477]: time="2025-02-13T20:21:05.316270112Z" level=info msg="StartContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\"" Feb 13 20:21:05.490337 systemd[1]: Started cri-containerd-44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff.scope - libcontainer container 44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff. Feb 13 20:21:05.547009 containerd[1477]: time="2025-02-13T20:21:05.546756362Z" level=info msg="StartContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" returns successfully" Feb 13 20:21:05.706788 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:21:05.707140 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 20:21:05.850417 kubelet[1780]: E0213 20:21:05.850344 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:05.934372 update_engine[1451]: I20250213 20:21:05.933995 1451 update_attempter.cc:509] Updating boot flags... Feb 13 20:21:05.989384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2572) Feb 13 20:21:06.322374 kubelet[1780]: I0213 20:21:06.321913 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kx66g" podStartSLOduration=4.571128062 podStartE2EDuration="30.321887296s" podCreationTimestamp="2025-02-13 20:20:36 +0000 UTC" firstStartedPulling="2025-02-13 20:20:39.463731396 +0000 UTC m=+4.861189894" lastFinishedPulling="2025-02-13 20:21:05.214490623 +0000 UTC m=+30.611949128" observedRunningTime="2025-02-13 20:21:06.320395796 +0000 UTC m=+31.717854322" watchObservedRunningTime="2025-02-13 20:21:06.321887296 +0000 UTC m=+31.719345819" Feb 13 20:21:06.486520 containerd[1477]: time="2025-02-13T20:21:06.486409387Z" level=info msg="StopContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" with timeout 5 (s)" Feb 13 20:21:06.488996 containerd[1477]: time="2025-02-13T20:21:06.488927324Z" level=info msg="Stop container \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" with signal terminated" Feb 13 20:21:06.851538 kubelet[1780]: E0213 20:21:06.851458 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:07.548453 systemd[1]: cri-containerd-44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff.scope: Deactivated successfully. Feb 13 20:21:07.549210 systemd[1]: cri-containerd-44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff.scope: Consumed 1.019s CPU time. Feb 13 20:21:07.610901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff-rootfs.mount: Deactivated successfully.
Feb 13 20:21:07.737285 containerd[1477]: time="2025-02-13T20:21:07.736920106Z" level=info msg="shim disconnected" id=44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff namespace=k8s.io Feb 13 20:21:07.737285 containerd[1477]: time="2025-02-13T20:21:07.737046269Z" level=warning msg="cleaning up after shim disconnected" id=44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff namespace=k8s.io Feb 13 20:21:07.737285 containerd[1477]: time="2025-02-13T20:21:07.737064015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:07.780390 containerd[1477]: time="2025-02-13T20:21:07.779828673Z" level=info msg="StopContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" returns successfully" Feb 13 20:21:07.781339 containerd[1477]: time="2025-02-13T20:21:07.781117412Z" level=info msg="StopPodSandbox for \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\"" Feb 13 20:21:07.781339 containerd[1477]: time="2025-02-13T20:21:07.781188181Z" level=info msg="Container to stop \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:21:07.781339 containerd[1477]: time="2025-02-13T20:21:07.781201398Z" level=info msg="Container to stop \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:21:07.781339 containerd[1477]: time="2025-02-13T20:21:07.781211754Z" level=info msg="Container to stop \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:21:07.784493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1-shm.mount: Deactivated successfully. Feb 13 20:21:07.806750 systemd[1]: cri-containerd-6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1.scope: Deactivated successfully. Feb 13 20:21:07.852412 kubelet[1780]: E0213 20:21:07.852336 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:07.869777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1-rootfs.mount: Deactivated successfully. 
Feb 13 20:21:07.905927 containerd[1477]: time="2025-02-13T20:21:07.905692382Z" level=info msg="shim disconnected" id=6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1 namespace=k8s.io Feb 13 20:21:07.905927 containerd[1477]: time="2025-02-13T20:21:07.905775859Z" level=warning msg="cleaning up after shim disconnected" id=6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1 namespace=k8s.io Feb 13 20:21:07.905927 containerd[1477]: time="2025-02-13T20:21:07.905794340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:07.950758 containerd[1477]: time="2025-02-13T20:21:07.950697896Z" level=info msg="TearDown network for sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" successfully" Feb 13 20:21:07.950758 containerd[1477]: time="2025-02-13T20:21:07.950745138Z" level=info msg="StopPodSandbox for \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" returns successfully" Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980670 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-flexvol-driver-host\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980746 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25577796-c71f-47e3-bc93-42cc57d164d9-node-certs\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980779 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-xtables-lock\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980802 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-policysync\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980833 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25577796-c71f-47e3-bc93-42cc57d164d9-tigera-ca-bundle\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982106 kubelet[1780]: I0213 20:21:07.980907 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-log-dir\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.980934 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-lib-calico\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.980951 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-lib-modules\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.980970 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-run-calico\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.980997 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns5m6\" (UniqueName: \"kubernetes.io/projected/25577796-c71f-47e3-bc93-42cc57d164d9-kube-api-access-ns5m6\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.981015 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-bin-dir\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982612 kubelet[1780]: I0213 20:21:07.981032 1780 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-net-dir\") pod \"25577796-c71f-47e3-bc93-42cc57d164d9\" (UID: \"25577796-c71f-47e3-bc93-42cc57d164d9\") " Feb 13 20:21:07.982905 kubelet[1780]: I0213 20:21:07.981138 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.982905 kubelet[1780]: I0213 20:21:07.981275 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.990360 kubelet[1780]: I0213 20:21:07.985516 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.991272 kubelet[1780]: I0213 20:21:07.990414 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25577796-c71f-47e3-bc93-42cc57d164d9-node-certs" (OuterVolumeSpecName: "node-certs") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:21:07.991921 kubelet[1780]: I0213 20:21:07.990476 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.991921 kubelet[1780]: I0213 20:21:07.990502 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.994551 kubelet[1780]: I0213 20:21:07.992059 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.992690 systemd[1]: var-lib-kubelet-pods-25577796\x2dc71f\x2d47e3\x2dbc93\x2d42cc57d164d9-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 13 20:21:07.995049 kubelet[1780]: I0213 20:21:07.994832 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-policysync" (OuterVolumeSpecName: "policysync") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.995272 kubelet[1780]: I0213 20:21:07.995081 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:07.995272 kubelet[1780]: I0213 20:21:07.995155 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:21:08.003070 kubelet[1780]: I0213 20:21:08.002831 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25577796-c71f-47e3-bc93-42cc57d164d9-kube-api-access-ns5m6" (OuterVolumeSpecName: "kube-api-access-ns5m6") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "kube-api-access-ns5m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:21:08.003714 systemd[1]: var-lib-kubelet-pods-25577796\x2dc71f\x2d47e3\x2dbc93\x2d42cc57d164d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dns5m6.mount: Deactivated successfully. 
Feb 13 20:21:08.012562 kubelet[1780]: I0213 20:21:08.012006 1780 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25577796-c71f-47e3-bc93-42cc57d164d9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "25577796-c71f-47e3-bc93-42cc57d164d9" (UID: "25577796-c71f-47e3-bc93-42cc57d164d9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:21:08.047440 systemd[1]: Removed slice kubepods-besteffort-pod25577796_c71f_47e3_bc93_42cc57d164d9.slice - libcontainer container kubepods-besteffort-pod25577796_c71f_47e3_bc93_42cc57d164d9.slice. Feb 13 20:21:08.047597 systemd[1]: kubepods-besteffort-pod25577796_c71f_47e3_bc93_42cc57d164d9.slice: Consumed 2.064s CPU time. Feb 13 20:21:08.078925 kubelet[1780]: I0213 20:21:08.078757 1780 topology_manager.go:215] "Topology Admit Handler" podUID="23d04253-8be8-4cb4-bb0e-a066ac813ba4" podNamespace="calico-system" podName="calico-node-qg9c4" Feb 13 20:21:08.079145 kubelet[1780]: E0213 20:21:08.078957 1780 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" containerName="install-cni" Feb 13 20:21:08.079145 kubelet[1780]: E0213 20:21:08.078973 1780 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" containerName="calico-node" Feb 13 20:21:08.079145 kubelet[1780]: E0213 20:21:08.078987 1780 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" containerName="flexvol-driver" Feb 13 20:21:08.079145 kubelet[1780]: I0213 20:21:08.079017 1780 memory_manager.go:354] "RemoveStaleState removing state" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" containerName="calico-node" Feb 13 20:21:08.083711 kubelet[1780]: I0213 20:21:08.081486 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-policysync\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.083711 kubelet[1780]: I0213 20:21:08.081544 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-cni-log-dir\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.083711 kubelet[1780]: I0213 20:21:08.081578 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23d04253-8be8-4cb4-bb0e-a066ac813ba4-node-certs\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.083711 kubelet[1780]: I0213 20:21:08.081651 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-var-lib-calico\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.083711 kubelet[1780]: I0213 20:21:08.081682 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-cni-bin-dir\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084736 kubelet[1780]: I0213 20:21:08.081712 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-xtables-lock\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084736 kubelet[1780]: I0213 20:21:08.081740 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23d04253-8be8-4cb4-bb0e-a066ac813ba4-tigera-ca-bundle\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084736 kubelet[1780]: I0213 20:21:08.081766 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-cni-net-dir\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084736 kubelet[1780]: I0213 20:21:08.081790 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-flexvol-driver-host\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084736 kubelet[1780]: I0213 20:21:08.081817 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-lib-modules\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082030 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hbv\" (UniqueName: \"kubernetes.io/projected/23d04253-8be8-4cb4-bb0e-a066ac813ba4-kube-api-access-n7hbv\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082089 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23d04253-8be8-4cb4-bb0e-a066ac813ba4-var-run-calico\") pod \"calico-node-qg9c4\" (UID: \"23d04253-8be8-4cb4-bb0e-a066ac813ba4\") " pod="calico-system/calico-node-qg9c4" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082125 1780 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-log-dir\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082144 1780 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-lib-calico\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082160 1780 reconciler_common.go:289] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-lib-modules\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082174 1780 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-var-run-calico\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.084948 kubelet[1780]: I0213 20:21:08.082188 1780 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ns5m6\" (UniqueName: \"kubernetes.io/projected/25577796-c71f-47e3-bc93-42cc57d164d9-kube-api-access-ns5m6\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082212 1780 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-net-dir\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082224 1780 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-cni-bin-dir\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082238 1780 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-flexvol-driver-host\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082251 1780 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-xtables-lock\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082262 1780 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25577796-c71f-47e3-bc93-42cc57d164d9-policysync\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082280 1780 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25577796-c71f-47e3-bc93-42cc57d164d9-node-certs\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.085246 kubelet[1780]: I0213 20:21:08.082292 1780 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25577796-c71f-47e3-bc93-42cc57d164d9-tigera-ca-bundle\") on node \"64.23.133.95\" DevicePath \"\"" Feb 13 20:21:08.095133 systemd[1]: Created slice kubepods-besteffort-pod23d04253_8be8_4cb4_bb0e_a066ac813ba4.slice - libcontainer container kubepods-besteffort-pod23d04253_8be8_4cb4_bb0e_a066ac813ba4.slice. 
Feb 13 20:21:08.302766 kubelet[1780]: I0213 20:21:08.302709 1780 scope.go:117] "RemoveContainer" containerID="44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff" Feb 13 20:21:08.312938 containerd[1477]: time="2025-02-13T20:21:08.312589656Z" level=info msg="RemoveContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\"" Feb 13 20:21:08.324898 containerd[1477]: time="2025-02-13T20:21:08.324797031Z" level=info msg="RemoveContainer for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" returns successfully" Feb 13 20:21:08.328591 kubelet[1780]: I0213 20:21:08.328197 1780 scope.go:117] "RemoveContainer" containerID="3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d" Feb 13 20:21:08.338381 containerd[1477]: time="2025-02-13T20:21:08.338192499Z" level=info msg="RemoveContainer for \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\"" Feb 13 20:21:08.356963 containerd[1477]: time="2025-02-13T20:21:08.356719667Z" level=info msg="RemoveContainer for \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\" returns successfully" Feb 13 20:21:08.364460 kubelet[1780]: I0213 20:21:08.364246 1780 scope.go:117] "RemoveContainer" containerID="4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc" Feb 13 20:21:08.370575 containerd[1477]: time="2025-02-13T20:21:08.370022967Z" level=info msg="RemoveContainer for \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\"" Feb 13 20:21:08.378179 containerd[1477]: time="2025-02-13T20:21:08.377066451Z" level=info msg="RemoveContainer for \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\" returns successfully" Feb 13 20:21:08.378388 kubelet[1780]: I0213 20:21:08.377674 1780 scope.go:117] "RemoveContainer" containerID="44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff" Feb 13 20:21:08.379025 containerd[1477]: time="2025-02-13T20:21:08.378950580Z" level=error msg="ContainerStatus for \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\": not found" Feb 13 20:21:08.379310 kubelet[1780]: E0213 20:21:08.379275 1780 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\": not found" containerID="44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff" Feb 13 20:21:08.379520 kubelet[1780]: I0213 20:21:08.379324 1780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff"} err="failed to get container status \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"44fe54daee4f85e9a8f172b527a482ac39e16230e12ae1102f6e4125e206d9ff\": not found" Feb 13 20:21:08.379520 kubelet[1780]: I0213 20:21:08.379362 1780 scope.go:117] "RemoveContainer" containerID="3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d" Feb 13 20:21:08.382099 containerd[1477]: time="2025-02-13T20:21:08.380448979Z" level=error msg="ContainerStatus for \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\": not found" Feb 13 20:21:08.382099 containerd[1477]: time="2025-02-13T20:21:08.381326989Z" level=error msg="ContainerStatus for \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\": not found" Feb 13 20:21:08.382322 kubelet[1780]: E0213 20:21:08.380774 1780 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\": not found" containerID="3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d" Feb 13 20:21:08.382322 kubelet[1780]: I0213 20:21:08.380830 1780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d"} err="failed to get container status \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a0fea7cb0c91021d4b3e71ef0b4298d6d28c6aae695883b1ab6805a90a3392d\": not found" Feb 13 20:21:08.382322 kubelet[1780]: I0213 20:21:08.380892 1780 scope.go:117] "RemoveContainer" containerID="4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc" Feb 13 20:21:08.382322 kubelet[1780]: E0213 20:21:08.381563 1780 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\": not found" containerID="4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc" Feb 13 20:21:08.382322 kubelet[1780]: I0213 20:21:08.381597 1780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc"} err="failed to get container status \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4186e5f5cfd3ec99374defc48641ccd9092f369addb023afce0e416b020b6adc\": not found" Feb 13 20:21:08.402795 kubelet[1780]: E0213 20:21:08.402645 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:08.405070 containerd[1477]: time="2025-02-13T20:21:08.404610552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qg9c4,Uid:23d04253-8be8-4cb4-bb0e-a066ac813ba4,Namespace:calico-system,Attempt:0,}" Feb 13 20:21:08.514023 containerd[1477]: time="2025-02-13T20:21:08.510931277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:08.514023 containerd[1477]: time="2025-02-13T20:21:08.511975475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:08.514023 containerd[1477]: time="2025-02-13T20:21:08.512029753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:08.516447 containerd[1477]: time="2025-02-13T20:21:08.515515380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:08.579323 systemd[1]: Started cri-containerd-4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc.scope - libcontainer container 4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc. Feb 13 20:21:08.640087 systemd[1]: var-lib-kubelet-pods-25577796\x2dc71f\x2d47e3\x2dbc93\x2d42cc57d164d9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 20:21:08.713687 containerd[1477]: time="2025-02-13T20:21:08.713496979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qg9c4,Uid:23d04253-8be8-4cb4-bb0e-a066ac813ba4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\"" Feb 13 20:21:08.715901 kubelet[1780]: E0213 20:21:08.715837 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:08.723768 containerd[1477]: time="2025-02-13T20:21:08.723567253Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:21:08.734920 containerd[1477]: time="2025-02-13T20:21:08.734560120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:08.739236 containerd[1477]: time="2025-02-13T20:21:08.739146249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 20:21:08.741347 containerd[1477]: time="2025-02-13T20:21:08.741291825Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:08.750368 containerd[1477]: time="2025-02-13T20:21:08.750267006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:08.756350 containerd[1477]: time="2025-02-13T20:21:08.752417815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.536462175s" Feb 13 20:21:08.756350 containerd[1477]: time="2025-02-13T20:21:08.752490558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:21:08.763162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935568870.mount: Deactivated successfully. 
Feb 13 20:21:08.771472 containerd[1477]: time="2025-02-13T20:21:08.771262971Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2\"" Feb 13 20:21:08.772899 containerd[1477]: time="2025-02-13T20:21:08.772350321Z" level=info msg="StartContainer for \"d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2\"" Feb 13 20:21:08.806338 containerd[1477]: time="2025-02-13T20:21:08.805138524Z" level=info msg="CreateContainer within sandbox \"5bd2e24ca230494abe66e60de1b05555783559c718f4c6c42f7bf459e481b681\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:21:08.829224 containerd[1477]: time="2025-02-13T20:21:08.829154912Z" level=info msg="CreateContainer within sandbox \"5bd2e24ca230494abe66e60de1b05555783559c718f4c6c42f7bf459e481b681\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e6c0fa1ac0002013b11751985f9ce5900cb52641712023f8bc3a385a8aa7ebea\"" Feb 13 20:21:08.831736 containerd[1477]: time="2025-02-13T20:21:08.831661929Z" level=info msg="StartContainer for \"e6c0fa1ac0002013b11751985f9ce5900cb52641712023f8bc3a385a8aa7ebea\"" Feb 13 20:21:08.853729 kubelet[1780]: E0213 20:21:08.853627 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:08.879395 systemd[1]: Started cri-containerd-d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2.scope - libcontainer container d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2. Feb 13 20:21:08.918324 systemd[1]: Started cri-containerd-e6c0fa1ac0002013b11751985f9ce5900cb52641712023f8bc3a385a8aa7ebea.scope - libcontainer container e6c0fa1ac0002013b11751985f9ce5900cb52641712023f8bc3a385a8aa7ebea. Feb 13 20:21:09.003950 containerd[1477]: time="2025-02-13T20:21:09.002274381Z" level=info msg="StartContainer for \"d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2\" returns successfully" Feb 13 20:21:09.109140 systemd[1]: cri-containerd-d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2.scope: Deactivated successfully. 
Feb 13 20:21:09.176597 containerd[1477]: time="2025-02-13T20:21:09.176352014Z" level=info msg="StartContainer for \"e6c0fa1ac0002013b11751985f9ce5900cb52641712023f8bc3a385a8aa7ebea\" returns successfully" Feb 13 20:21:09.303021 containerd[1477]: time="2025-02-13T20:21:09.302401703Z" level=info msg="shim disconnected" id=d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2 namespace=k8s.io Feb 13 20:21:09.303021 containerd[1477]: time="2025-02-13T20:21:09.302833080Z" level=warning msg="cleaning up after shim disconnected" id=d0845b78be039f723b8a6f488934124de7fa8e65b69059270e804a3f51f83cb2 namespace=k8s.io Feb 13 20:21:09.303021 containerd[1477]: time="2025-02-13T20:21:09.302910709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:09.320189 kubelet[1780]: E0213 20:21:09.320043 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:09.323897 kubelet[1780]: E0213 20:21:09.323751 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:09.359695 kubelet[1780]: I0213 20:21:09.356220 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c5fb64cf-mj8wz" podStartSLOduration=1.5807576129999998 podStartE2EDuration="13.356190923s" podCreationTimestamp="2025-02-13 20:20:56 +0000 UTC" firstStartedPulling="2025-02-13 20:20:56.989638201 +0000 UTC m=+22.387096700" lastFinishedPulling="2025-02-13 20:21:08.765071503 +0000 UTC m=+34.162530010" observedRunningTime="2025-02-13 20:21:09.355490328 +0000 UTC m=+34.752948856" watchObservedRunningTime="2025-02-13 20:21:09.356190923 +0000 UTC m=+34.753649451" Feb 13 20:21:09.854718 kubelet[1780]: E0213 20:21:09.854618 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:10.035354 containerd[1477]: time="2025-02-13T20:21:10.035269572Z" level=info msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" Feb 13 20:21:10.036926 containerd[1477]: time="2025-02-13T20:21:10.036702682Z" level=info msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" Feb 13 20:21:10.041829 kubelet[1780]: I0213 20:21:10.040795 1780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25577796-c71f-47e3-bc93-42cc57d164d9" path="/var/lib/kubelet/pods/25577796-c71f-47e3-bc93-42cc57d164d9/volumes" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.204 [INFO][2967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.206 [INFO][2967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" iface="eth0" netns="/var/run/netns/cni-030a80b5-4f42-4ad1-f507-602454882dc9" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.207 [INFO][2967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" iface="eth0" netns="/var/run/netns/cni-030a80b5-4f42-4ad1-f507-602454882dc9" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.208 [INFO][2967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" iface="eth0" netns="/var/run/netns/cni-030a80b5-4f42-4ad1-f507-602454882dc9" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.208 [INFO][2967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.208 [INFO][2967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.288 [INFO][2979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.289 [INFO][2979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.289 [INFO][2979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.331 [WARNING][2979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.331 [INFO][2979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.339 [INFO][2979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:10.359463 containerd[1477]: 2025-02-13 20:21:10.347 [INFO][2967] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:10.359463 containerd[1477]: time="2025-02-13T20:21:10.354061749Z" level=info msg="TearDown network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" successfully" Feb 13 20:21:10.359463 containerd[1477]: time="2025-02-13T20:21:10.354173431Z" level=info msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" returns successfully" Feb 13 20:21:10.366299 kubelet[1780]: I0213 20:21:10.354054 1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:21:10.366299 kubelet[1780]: E0213 20:21:10.355005 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:10.366299 kubelet[1780]: E0213 20:21:10.355551 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:10.367647 systemd[1]: run-netns-cni\x2d030a80b5\x2d4f42\x2d4ad1\x2df507\x2d602454882dc9.mount: Deactivated successfully. Feb 13 20:21:10.370775 containerd[1477]: time="2025-02-13T20:21:10.368774140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d558c6c6c-njt6l,Uid:8386e5da-6e1b-4bc1-b820-a5872769500e,Namespace:calico-system,Attempt:1,}" Feb 13 20:21:10.374137 containerd[1477]: time="2025-02-13T20:21:10.374068986Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:21:10.429798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502312564.mount: Deactivated successfully. Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.217 [INFO][2968] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.218 [INFO][2968] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" iface="eth0" netns="/var/run/netns/cni-a524f6ff-2432-57d2-f582-94f8d7141c3a" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.218 [INFO][2968] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" iface="eth0" netns="/var/run/netns/cni-a524f6ff-2432-57d2-f582-94f8d7141c3a" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.219 [INFO][2968] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" iface="eth0" netns="/var/run/netns/cni-a524f6ff-2432-57d2-f582-94f8d7141c3a" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.219 [INFO][2968] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.219 [INFO][2968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.334 [INFO][2983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.335 [INFO][2983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.340 [INFO][2983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.382 [WARNING][2983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.382 [INFO][2983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.404 [INFO][2983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:10.430769 containerd[1477]: 2025-02-13 20:21:10.422 [INFO][2968] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:10.434228 containerd[1477]: time="2025-02-13T20:21:10.433490329Z" level=info msg="TearDown network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" successfully" Feb 13 20:21:10.437029 containerd[1477]: time="2025-02-13T20:21:10.436945917Z" level=info msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" returns successfully" Feb 13 20:21:10.445365 containerd[1477]: time="2025-02-13T20:21:10.445304707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvn65,Uid:ce22ba38-b4f8-4031-88e9-0196a2ef8f62,Namespace:calico-system,Attempt:1,}" Feb 13 20:21:10.467939 containerd[1477]: time="2025-02-13T20:21:10.467693997Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe\"" Feb 13 20:21:10.470559 containerd[1477]: time="2025-02-13T20:21:10.470351034Z" level=info msg="StartContainer for \"79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe\"" Feb 13 20:21:10.552253 systemd[1]: Started cri-containerd-79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe.scope - libcontainer container 79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe. Feb 13 20:21:10.616832 systemd[1]: run-netns-cni\x2da524f6ff\x2d2432\x2d57d2\x2df582\x2d94f8d7141c3a.mount: Deactivated successfully. Feb 13 20:21:10.663721 containerd[1477]: time="2025-02-13T20:21:10.661524433Z" level=info msg="StartContainer for \"79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe\" returns successfully" Feb 13 20:21:10.856238 kubelet[1780]: E0213 20:21:10.856139 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:10.974144 systemd[1]: Started sshd@7-64.23.133.95:22-194.0.234.37:41946.service - OpenSSH per-connection server daemon (194.0.234.37:41946). 
Feb 13 20:21:11.008878 systemd-networkd[1377]: cali17638900755: Link UP Feb 13 20:21:11.009989 systemd-networkd[1377]: cali17638900755: Gained carrier Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.571 [INFO][2992] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.677 [INFO][2992] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0 calico-kube-controllers-5d558c6c6c- calico-system 8386e5da-6e1b-4bc1-b820-a5872769500e 1254 0 2025-02-13 20:20:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d558c6c6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 64.23.133.95 calico-kube-controllers-5d558c6c6c-njt6l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali17638900755 [] []}} ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.677 [INFO][2992] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.782 [INFO][3049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" HandleID="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.825 [INFO][3049] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" HandleID="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000507c0), Attrs:map[string]string{"namespace":"calico-system", "node":"64.23.133.95", "pod":"calico-kube-controllers-5d558c6c6c-njt6l", "timestamp":"2025-02-13 20:21:10.782841325 +0000 UTC"}, Hostname:"64.23.133.95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.826 [INFO][3049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.826 [INFO][3049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.826 [INFO][3049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.133.95' Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.832 [INFO][3049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.861 [INFO][3049] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.889 [INFO][3049] ipam/ipam.go 489: Trying affinity for 192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.896 [INFO][3049] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.908 [INFO][3049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.908 [INFO][3049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.914 [INFO][3049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.931 [INFO][3049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.952 [INFO][3049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.1/26] block=192.168.103.0/26 handle="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.952 [INFO][3049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.1/26] handle="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" host="64.23.133.95" Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.952 [INFO][3049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
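The allocation just logged is Calico's block-affinity IPAM: node 64.23.133.95 holds an affinity for the /26 block 192.168.103.0/26, loads it, claims the first free address (here 192.168.103.1), and writes the block back to persist the claim. A stdlib-only sketch of the claim step — claimFromBlock is an illustrative name, and real Calico additionally handles reserved addresses, retries, and datastore conflicts:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // claimFromBlock returns the first address in block not already used.
    func claimFromBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !used[a] {
                used[a] = true // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted: try another affine block
    }

    func main() {
        block := netip.MustParsePrefix("192.168.103.0/26")
        used := map[netip.Addr]bool{}
        ip, _ := claimFromBlock(block, used)
        fmt.Println("claimed", ip) // 192.168.103.1, matching the log
    }

The per-node /26 affinity is why all three pods in this section land in 192.168.103.0/26: the host claims .1, .2, and .3 from the same block without coordinating beyond the host-wide lock.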
Feb 13 20:21:11.099833 containerd[1477]: 2025-02-13 20:21:10.952 [INFO][3049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.1/26] IPv6=[] ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" HandleID="k8s-pod-network.25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:10.962 [INFO][2992] cni-plugin/k8s.go 386: Populated endpoint ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0", GenerateName:"calico-kube-controllers-5d558c6c6c-", Namespace:"calico-system", SelfLink:"", UID:"8386e5da-6e1b-4bc1-b820-a5872769500e", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d558c6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"", Pod:"calico-kube-controllers-5d558c6c6c-njt6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17638900755", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:10.964 [INFO][2992] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.1/32] ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:10.965 [INFO][2992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17638900755 ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:11.029 [INFO][2992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:11.037 [INFO][2992] cni-plugin/k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0", GenerateName:"calico-kube-controllers-5d558c6c6c-", Namespace:"calico-system", SelfLink:"", UID:"8386e5da-6e1b-4bc1-b820-a5872769500e", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d558c6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe", Pod:"calico-kube-controllers-5d558c6c6c-njt6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17638900755", MAC:"de:a8:a6:c3:d3:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:11.101600 containerd[1477]: 2025-02-13 20:21:11.080 [INFO][2992] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe" Namespace="calico-system" Pod="calico-kube-controllers-5d558c6c6c-njt6l" WorkloadEndpoint="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:11.192978 containerd[1477]: time="2025-02-13T20:21:11.174034777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:11.192978 containerd[1477]: time="2025-02-13T20:21:11.174276905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:11.192978 containerd[1477]: time="2025-02-13T20:21:11.174321457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:11.192978 containerd[1477]: time="2025-02-13T20:21:11.174513550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:11.221443 systemd-networkd[1377]: cali6c47fd70776: Link UP Feb 13 20:21:11.243594 systemd-networkd[1377]: cali6c47fd70776: Gained carrier Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.671 [INFO][3006] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.717 [INFO][3006] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.133.95-k8s-csi--node--driver--hvn65-eth0 csi-node-driver- calico-system ce22ba38-b4f8-4031-88e9-0196a2ef8f62 1255 0 2025-02-13 20:20:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 64.23.133.95 csi-node-driver-hvn65 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6c47fd70776 [] []}} ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.717 [INFO][3006] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.871 [INFO][3054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" HandleID="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.914 [INFO][3054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" HandleID="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fceb0), Attrs:map[string]string{"namespace":"calico-system", "node":"64.23.133.95", "pod":"csi-node-driver-hvn65", "timestamp":"2025-02-13 20:21:10.871596606 +0000 UTC"}, Hostname:"64.23.133.95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.914 [INFO][3054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.953 [INFO][3054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.953 [INFO][3054] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.133.95' Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.961 [INFO][3054] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:10.997 [INFO][3054] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.108 [INFO][3054] ipam/ipam.go 489: Trying affinity for 192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.115 [INFO][3054] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.123 [INFO][3054] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.123 [INFO][3054] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.133 [INFO][3054] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02 Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.143 [INFO][3054] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.166 [INFO][3054] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.2/26] block=192.168.103.0/26 handle="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.166 [INFO][3054] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.2/26] handle="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" host="64.23.133.95" Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.166 [INFO][3054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:21:11.304172 containerd[1477]: 2025-02-13 20:21:11.166 [INFO][3054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.2/26] IPv6=[] ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" HandleID="k8s-pod-network.802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.216 [INFO][3006] cni-plugin/k8s.go 386: Populated endpoint ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-csi--node--driver--hvn65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce22ba38-b4f8-4031-88e9-0196a2ef8f62", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"", Pod:"csi-node-driver-hvn65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c47fd70776", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.216 [INFO][3006] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.2/32] ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.216 [INFO][3006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c47fd70776 ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.247 [INFO][3006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.250 [INFO][3006] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" 
WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-csi--node--driver--hvn65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce22ba38-b4f8-4031-88e9-0196a2ef8f62", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02", Pod:"csi-node-driver-hvn65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c47fd70776", MAC:"1e:eb:79:f8:96:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:11.307523 containerd[1477]: 2025-02-13 20:21:11.299 [INFO][3006] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02" Namespace="calico-system" Pod="csi-node-driver-hvn65" WorkloadEndpoint="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:11.351631 systemd[1]: Started cri-containerd-25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe.scope - libcontainer container 25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe. Feb 13 20:21:11.412509 kubelet[1780]: E0213 20:21:11.411612 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:11.499766 containerd[1477]: time="2025-02-13T20:21:11.499292183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:11.499766 containerd[1477]: time="2025-02-13T20:21:11.499396453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:11.499766 containerd[1477]: time="2025-02-13T20:21:11.499415715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:11.499766 containerd[1477]: time="2025-02-13T20:21:11.499560217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:11.565219 systemd[1]: Started cri-containerd-802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02.scope - libcontainer container 802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02. 
Feb 13 20:21:11.642267 containerd[1477]: time="2025-02-13T20:21:11.642164394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d558c6c6c-njt6l,Uid:8386e5da-6e1b-4bc1-b820-a5872769500e,Namespace:calico-system,Attempt:1,} returns sandbox id \"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe\"" Feb 13 20:21:11.647787 containerd[1477]: time="2025-02-13T20:21:11.647608386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:21:11.721891 containerd[1477]: time="2025-02-13T20:21:11.720732675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvn65,Uid:ce22ba38-b4f8-4031-88e9-0196a2ef8f62,Namespace:calico-system,Attempt:1,} returns sandbox id \"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02\"" Feb 13 20:21:11.857729 kubelet[1780]: E0213 20:21:11.857665 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:12.682117 systemd-networkd[1377]: cali17638900755: Gained IPv6LL Feb 13 20:21:12.720244 systemd[1]: cri-containerd-79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe.scope: Deactivated successfully. Feb 13 20:21:12.720550 systemd[1]: cri-containerd-79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe.scope: Consumed 1.159s CPU time. Feb 13 20:21:12.809154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe-rootfs.mount: Deactivated successfully. Feb 13 20:21:12.818934 containerd[1477]: time="2025-02-13T20:21:12.812990411Z" level=info msg="shim disconnected" id=79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe namespace=k8s.io Feb 13 20:21:12.820344 containerd[1477]: time="2025-02-13T20:21:12.819697404Z" level=warning msg="cleaning up after shim disconnected" id=79ad083d4d637b86759ef465c977deaa5b400135318f318e1877ed78630805fe namespace=k8s.io Feb 13 20:21:12.820747 containerd[1477]: time="2025-02-13T20:21:12.820707057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:12.858359 kubelet[1780]: E0213 20:21:12.858274 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:13.035250 containerd[1477]: time="2025-02-13T20:21:13.034603062Z" level=info msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" Feb 13 20:21:13.251921 systemd-networkd[1377]: cali6c47fd70776: Gained IPv6LL Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.225 [INFO][3210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.226 [INFO][3210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" iface="eth0" netns="/var/run/netns/cni-8d6878b1-a085-6412-a512-67871b9e2c4f" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.236 [INFO][3210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" iface="eth0" netns="/var/run/netns/cni-8d6878b1-a085-6412-a512-67871b9e2c4f" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.237 [INFO][3210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" iface="eth0" netns="/var/run/netns/cni-8d6878b1-a085-6412-a512-67871b9e2c4f" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.237 [INFO][3210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.238 [INFO][3210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.329 [INFO][3216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.329 [INFO][3216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.329 [INFO][3216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.364 [WARNING][3216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.364 [INFO][3216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.374 [INFO][3216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:13.379915 containerd[1477]: 2025-02-13 20:21:13.376 [INFO][3210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:13.385915 containerd[1477]: time="2025-02-13T20:21:13.383282074Z" level=info msg="TearDown network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" successfully" Feb 13 20:21:13.385915 containerd[1477]: time="2025-02-13T20:21:13.383351345Z" level=info msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" returns successfully" Feb 13 20:21:13.387027 systemd[1]: run-netns-cni\x2d8d6878b1\x2da085\x2d6412\x2da512\x2d67871b9e2c4f.mount: Deactivated successfully. 
Feb 13 20:21:13.388671 containerd[1477]: time="2025-02-13T20:21:13.385869369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-8289b,Uid:d11c2fc0-063e-4017-ba54-3c29f7590e21,Namespace:default,Attempt:1,}" Feb 13 20:21:13.427441 sshd[3065]: Invalid user nutanix from 194.0.234.37 port 41946 Feb 13 20:21:13.452644 kubelet[1780]: E0213 20:21:13.451603 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:13.502259 containerd[1477]: time="2025-02-13T20:21:13.502154415Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:21:13.563922 containerd[1477]: time="2025-02-13T20:21:13.563799750Z" level=info msg="CreateContainer within sandbox \"4e3330fc583a56ce5dcfaa408bc6e28bef410cb25ae93d0274c9c34927ba67dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8\"" Feb 13 20:21:13.565934 containerd[1477]: time="2025-02-13T20:21:13.565157503Z" level=info msg="StartContainer for \"c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8\"" Feb 13 20:21:13.641085 sshd[3065]: Connection closed by invalid user nutanix 194.0.234.37 port 41946 [preauth] Feb 13 20:21:13.645639 systemd[1]: sshd@7-64.23.133.95:22-194.0.234.37:41946.service: Deactivated successfully. Feb 13 20:21:13.741550 systemd[1]: Started cri-containerd-c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8.scope - libcontainer container c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8. 
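The recurring kubelet dns.go:153 event fires because the Linux resolver only honours the first three nameserver entries, so kubelet warns and applies a truncated list; note the applied line here even carries a duplicate (67.207.67.3 twice), which still counts against the limit. A sketch of the truncation — kubelet's real logic lives in its dns package, and the fourth nameserver (8.8.8.8) below is a hypothetical stand-in for whatever was dropped:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // the glibc MAXNS limit kubelet warns about

    // applyNameserverLimit collects nameserver lines and keeps the first three.
    func applyNameserverLimit(resolvConf string) []string {
        var ns []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            f := strings.Fields(sc.Text())
            if len(f) >= 2 && f[0] == "nameserver" {
                ns = append(ns, f[1])
            }
        }
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers] // "some nameservers have been omitted"
        }
        return ns
    }

    func main() {
        conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
        fmt.Println(applyNameserverLimit(conf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
    }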
Feb 13 20:21:13.870345 kubelet[1780]: E0213 20:21:13.859122 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:13.897088 containerd[1477]: time="2025-02-13T20:21:13.896926267Z" level=info msg="StartContainer for \"c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8\" returns successfully" Feb 13 20:21:13.919377 systemd-networkd[1377]: calid3986c6d02c: Link UP Feb 13 20:21:13.922982 systemd-networkd[1377]: calid3986c6d02c: Gained carrier Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.544 [INFO][3222] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.580 [INFO][3222] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0 nginx-deployment-85f456d6dd- default d11c2fc0-063e-4017-ba54-3c29f7590e21 1279 0 2025-02-13 20:20:59 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 64.23.133.95 nginx-deployment-85f456d6dd-8289b eth0 default [] [] [kns.default ksa.default.default] calid3986c6d02c [] []}} ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.580 [INFO][3222] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.691 [INFO][3242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" HandleID="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.775 [INFO][3242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" HandleID="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051470), Attrs:map[string]string{"namespace":"default", "node":"64.23.133.95", "pod":"nginx-deployment-85f456d6dd-8289b", "timestamp":"2025-02-13 20:21:13.690433949 +0000 UTC"}, Hostname:"64.23.133.95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.775 [INFO][3242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.776 [INFO][3242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.776 [INFO][3242] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.133.95' Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.783 [INFO][3242] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.797 [INFO][3242] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.830 [INFO][3242] ipam/ipam.go 489: Trying affinity for 192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.848 [INFO][3242] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.862 [INFO][3242] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.862 [INFO][3242] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.871 [INFO][3242] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303 Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.882 [INFO][3242] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.899 [INFO][3242] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.3/26] block=192.168.103.0/26 handle="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.899 [INFO][3242] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.3/26] handle="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" host="64.23.133.95" Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.900 [INFO][3242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:21:13.961236 containerd[1477]: 2025-02-13 20:21:13.900 [INFO][3242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.3/26] IPv6=[] ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" HandleID="k8s-pod-network.101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.908 [INFO][3222] cni-plugin/k8s.go 386: Populated endpoint ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d11c2fc0-063e-4017-ba54-3c29f7590e21", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-8289b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3986c6d02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.908 [INFO][3222] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.3/32] ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.908 [INFO][3222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3986c6d02c ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.925 [INFO][3222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.926 [INFO][3222] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d11c2fc0-063e-4017-ba54-3c29f7590e21", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303", Pod:"nginx-deployment-85f456d6dd-8289b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3986c6d02c", MAC:"ce:a2:be:12:fd:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:13.962672 containerd[1477]: 2025-02-13 20:21:13.943 [INFO][3222] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303" Namespace="default" Pod="nginx-deployment-85f456d6dd-8289b" WorkloadEndpoint="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:14.129890 containerd[1477]: time="2025-02-13T20:21:14.126043152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:14.129890 containerd[1477]: time="2025-02-13T20:21:14.126663102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:14.129890 containerd[1477]: time="2025-02-13T20:21:14.126731955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:14.132916 containerd[1477]: time="2025-02-13T20:21:14.132057878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:14.268220 systemd[1]: Started cri-containerd-101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303.scope - libcontainer container 101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303. 
Feb 13 20:21:14.442897 containerd[1477]: time="2025-02-13T20:21:14.440908456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-8289b,Uid:d11c2fc0-063e-4017-ba54-3c29f7590e21,Namespace:default,Attempt:1,} returns sandbox id \"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303\"" Feb 13 20:21:14.472940 kubelet[1780]: E0213 20:21:14.472543 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:14.546203 systemd[1]: run-containerd-runc-k8s.io-c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8-runc.i24VEu.mount: Deactivated successfully. Feb 13 20:21:14.550364 kubelet[1780]: I0213 20:21:14.549061 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qg9c4" podStartSLOduration=6.549029448 podStartE2EDuration="6.549029448s" podCreationTimestamp="2025-02-13 20:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:21:14.548559529 +0000 UTC m=+39.946018064" watchObservedRunningTime="2025-02-13 20:21:14.549029448 +0000 UTC m=+39.946487976" Feb 13 20:21:14.861338 kubelet[1780]: E0213 20:21:14.861203 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:15.425330 systemd-networkd[1377]: calid3986c6d02c: Gained IPv6LL Feb 13 20:21:15.485912 kubelet[1780]: E0213 20:21:15.483705 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:15.823067 kubelet[1780]: E0213 20:21:15.822995 1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:15.861568 kubelet[1780]: E0213 20:21:15.861446 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:16.075592 containerd[1477]: time="2025-02-13T20:21:16.074414324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:16.078901 containerd[1477]: time="2025-02-13T20:21:16.078619888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:21:16.080237 containerd[1477]: time="2025-02-13T20:21:16.080091898Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:16.086553 containerd[1477]: time="2025-02-13T20:21:16.086484861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:16.088214 containerd[1477]: time="2025-02-13T20:21:16.087995859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.438802119s" Feb 13 20:21:16.088611 containerd[1477]: time="2025-02-13T20:21:16.088564535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:21:16.103182 containerd[1477]: time="2025-02-13T20:21:16.102213571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:21:16.195565 containerd[1477]: time="2025-02-13T20:21:16.195475117Z" level=info msg="CreateContainer within sandbox \"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:21:16.366206 containerd[1477]: time="2025-02-13T20:21:16.364601491Z" level=info msg="CreateContainer within sandbox \"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff\"" Feb 13 20:21:16.368914 containerd[1477]: time="2025-02-13T20:21:16.366699543Z" level=info msg="StartContainer for \"19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff\"" Feb 13 20:21:16.378936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591896393.mount: Deactivated successfully. Feb 13 20:21:16.492157 systemd[1]: Started cri-containerd-19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff.scope - libcontainer container 19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff. Feb 13 20:21:16.729116 containerd[1477]: time="2025-02-13T20:21:16.724600271Z" level=info msg="StartContainer for \"19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff\" returns successfully" Feb 13 20:21:16.865579 kubelet[1780]: E0213 20:21:16.864881 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:17.865187 kubelet[1780]: E0213 20:21:17.865093 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:18.148729 containerd[1477]: time="2025-02-13T20:21:18.147954908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:18.150789 containerd[1477]: time="2025-02-13T20:21:18.150026310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:21:18.152200 containerd[1477]: time="2025-02-13T20:21:18.151763924Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:18.155498 containerd[1477]: time="2025-02-13T20:21:18.155352955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:18.157383 containerd[1477]: time="2025-02-13T20:21:18.157301702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.055019325s" Feb 13 20:21:18.157383 containerd[1477]: time="2025-02-13T20:21:18.157360446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:21:18.159872 containerd[1477]: time="2025-02-13T20:21:18.159453969Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:21:18.162168 containerd[1477]: time="2025-02-13T20:21:18.162107935Z" level=info msg="CreateContainer within sandbox \"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:21:18.208413 containerd[1477]: time="2025-02-13T20:21:18.208321291Z" level=info msg="CreateContainer within sandbox \"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c4d8be2675772ff2f8e420dead4a7b547fb38d589619d0147bd7dce1ed49f72b\"" Feb 13 20:21:18.212627 containerd[1477]: time="2025-02-13T20:21:18.212560469Z" level=info msg="StartContainer for \"c4d8be2675772ff2f8e420dead4a7b547fb38d589619d0147bd7dce1ed49f72b\"" Feb 13 20:21:18.298392 systemd[1]: Started cri-containerd-c4d8be2675772ff2f8e420dead4a7b547fb38d589619d0147bd7dce1ed49f72b.scope - libcontainer container c4d8be2675772ff2f8e420dead4a7b547fb38d589619d0147bd7dce1ed49f72b. Feb 13 20:21:18.369362 containerd[1477]: time="2025-02-13T20:21:18.368425842Z" level=info msg="StartContainer for \"c4d8be2675772ff2f8e420dead4a7b547fb38d589619d0147bd7dce1ed49f72b\" returns successfully" Feb 13 20:21:18.866008 kubelet[1780]: E0213 20:21:18.865918 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:19.866660 kubelet[1780]: E0213 20:21:19.866595 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:20.868035 kubelet[1780]: E0213 20:21:20.867793 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:21.869200 kubelet[1780]: E0213 20:21:21.869133 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:22.442487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401870180.mount: Deactivated successfully. 
Feb 13 20:21:22.871279 kubelet[1780]: E0213 20:21:22.871213 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:23.873641 kubelet[1780]: E0213 20:21:23.873578 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:24.873933 kubelet[1780]: E0213 20:21:24.873800 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:24.956052 kubelet[1780]: I0213 20:21:24.953939 1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:21:24.957725 kubelet[1780]: E0213 20:21:24.957018 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:24.993927 kubelet[1780]: I0213 20:21:24.990889 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d558c6c6c-njt6l" podStartSLOduration=23.536354936 podStartE2EDuration="27.990818734s" podCreationTimestamp="2025-02-13 20:20:57 +0000 UTC" firstStartedPulling="2025-02-13 20:21:11.644999127 +0000 UTC m=+37.042457642" lastFinishedPulling="2025-02-13 20:21:16.099462921 +0000 UTC m=+41.496921440" observedRunningTime="2025-02-13 20:21:17.54324361 +0000 UTC m=+42.940702144" watchObservedRunningTime="2025-02-13 20:21:24.990818734 +0000 UTC m=+50.388277253" Feb 13 20:21:25.103103 containerd[1477]: time="2025-02-13T20:21:25.103018282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:25.106250 containerd[1477]: time="2025-02-13T20:21:25.106135223Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:21:25.109931 containerd[1477]: time="2025-02-13T20:21:25.109279893Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:25.115768 containerd[1477]: time="2025-02-13T20:21:25.115693020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:25.118787 containerd[1477]: time="2025-02-13T20:21:25.118701208Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.959188411s" Feb 13 20:21:25.119039 containerd[1477]: time="2025-02-13T20:21:25.119012455Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:21:25.125217 containerd[1477]: time="2025-02-13T20:21:25.124285784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:21:25.163604 containerd[1477]: time="2025-02-13T20:21:25.163499572Z" level=info msg="CreateContainer within sandbox \"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:21:25.200689 containerd[1477]: time="2025-02-13T20:21:25.200431467Z" level=info msg="CreateContainer within sandbox \"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"be12e8591bdd34326df5300346dcd306be2cc1ff1f4428c4b97a2bed9019a5c0\"" Feb 13 20:21:25.203188 containerd[1477]: time="2025-02-13T20:21:25.202976657Z" level=info msg="StartContainer for \"be12e8591bdd34326df5300346dcd306be2cc1ff1f4428c4b97a2bed9019a5c0\"" Feb 13 20:21:25.205769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725586967.mount: Deactivated successfully. Feb 13 20:21:25.319518 systemd[1]: Started cri-containerd-be12e8591bdd34326df5300346dcd306be2cc1ff1f4428c4b97a2bed9019a5c0.scope - libcontainer container be12e8591bdd34326df5300346dcd306be2cc1ff1f4428c4b97a2bed9019a5c0. Feb 13 20:21:25.407755 containerd[1477]: time="2025-02-13T20:21:25.406550345Z" level=info msg="StartContainer for \"be12e8591bdd34326df5300346dcd306be2cc1ff1f4428c4b97a2bed9019a5c0\" returns successfully" Feb 13 20:21:25.591126 kubelet[1780]: E0213 20:21:25.589988 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:25.674133 kernel: bpftool[3910]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:21:25.884986 kubelet[1780]: E0213 20:21:25.875991 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:26.118177 systemd-networkd[1377]: vxlan.calico: Link UP Feb 13 20:21:26.118190 systemd-networkd[1377]: vxlan.calico: Gained carrier Feb 13 20:21:26.885413 kubelet[1780]: E0213 20:21:26.885354 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:27.176001 containerd[1477]: time="2025-02-13T20:21:27.175785482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:27.180891 containerd[1477]: time="2025-02-13T20:21:27.180532660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:21:27.192890 containerd[1477]: time="2025-02-13T20:21:27.192082679Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:27.197573 containerd[1477]: time="2025-02-13T20:21:27.197501829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:27.200192 containerd[1477]: time="2025-02-13T20:21:27.199240645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.073840153s" Feb 13 20:21:27.200192 containerd[1477]: time="2025-02-13T20:21:27.199287788Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:21:27.203206 containerd[1477]: time="2025-02-13T20:21:27.203134160Z" level=info msg="CreateContainer within sandbox \"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:21:27.236252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478420828.mount: Deactivated successfully. Feb 13 20:21:27.266493 containerd[1477]: time="2025-02-13T20:21:27.266288447Z" level=info msg="CreateContainer within sandbox \"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"57c37891e45cd7f19b1e5f4c23e41f8eca7a41757c20e3c960315ecf2dc73c2c\"" Feb 13 20:21:27.267546 containerd[1477]: time="2025-02-13T20:21:27.267499944Z" level=info msg="StartContainer for \"57c37891e45cd7f19b1e5f4c23e41f8eca7a41757c20e3c960315ecf2dc73c2c\"" Feb 13 20:21:27.330231 systemd[1]: Started cri-containerd-57c37891e45cd7f19b1e5f4c23e41f8eca7a41757c20e3c960315ecf2dc73c2c.scope - libcontainer container 57c37891e45cd7f19b1e5f4c23e41f8eca7a41757c20e3c960315ecf2dc73c2c. Feb 13 20:21:27.379373 containerd[1477]: time="2025-02-13T20:21:27.379193149Z" level=info msg="StartContainer for \"57c37891e45cd7f19b1e5f4c23e41f8eca7a41757c20e3c960315ecf2dc73c2c\" returns successfully" Feb 13 20:21:27.521296 systemd-networkd[1377]: vxlan.calico: Gained IPv6LL Feb 13 20:21:27.644220 kubelet[1780]: I0213 20:21:27.643934 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-8289b" podStartSLOduration=17.966005304 podStartE2EDuration="28.64390912s" podCreationTimestamp="2025-02-13 20:20:59 +0000 UTC" firstStartedPulling="2025-02-13 20:21:14.445428954 +0000 UTC m=+39.842887466" lastFinishedPulling="2025-02-13 20:21:25.123332768 +0000 UTC m=+50.520791282" observedRunningTime="2025-02-13 20:21:25.621151076 +0000 UTC m=+51.018609600" watchObservedRunningTime="2025-02-13 20:21:27.64390912 +0000 UTC m=+53.041367654" Feb 13 20:21:27.644220 kubelet[1780]: I0213 20:21:27.644213 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hvn65" podStartSLOduration=36.170878156 podStartE2EDuration="51.644170877s" podCreationTimestamp="2025-02-13 20:20:36 +0000 UTC" firstStartedPulling="2025-02-13 20:21:11.727118337 +0000 UTC m=+37.124576855" lastFinishedPulling="2025-02-13 20:21:27.200411052 +0000 UTC m=+52.597869576" observedRunningTime="2025-02-13 20:21:27.643898524 +0000 UTC m=+53.041357061" watchObservedRunningTime="2025-02-13 20:21:27.644170877 +0000 UTC m=+53.041629408" Feb 13 20:21:27.886954 kubelet[1780]: E0213 20:21:27.886808 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:28.216501 kubelet[1780]: I0213 20:21:28.216301 1780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:21:28.216501 kubelet[1780]: I0213 20:21:28.216384 1780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:21:28.887784 kubelet[1780]: E0213 20:21:28.887693 1780 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:29.526795 kubelet[1780]: I0213 20:21:29.526718 1780 topology_manager.go:215] "Topology Admit Handler" podUID="68d0800e-95f4-43b2-a265-f96c6bb40611" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 20:21:29.540389 systemd[1]: Created slice kubepods-besteffort-pod68d0800e_95f4_43b2_a265_f96c6bb40611.slice - libcontainer container kubepods-besteffort-pod68d0800e_95f4_43b2_a265_f96c6bb40611.slice. Feb 13 20:21:29.581131 kubelet[1780]: I0213 20:21:29.581063 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/68d0800e-95f4-43b2-a265-f96c6bb40611-data\") pod \"nfs-server-provisioner-0\" (UID: \"68d0800e-95f4-43b2-a265-f96c6bb40611\") " pod="default/nfs-server-provisioner-0" Feb 13 20:21:29.581429 kubelet[1780]: I0213 20:21:29.581218 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6flg\" (UniqueName: \"kubernetes.io/projected/68d0800e-95f4-43b2-a265-f96c6bb40611-kube-api-access-d6flg\") pod \"nfs-server-provisioner-0\" (UID: \"68d0800e-95f4-43b2-a265-f96c6bb40611\") " pod="default/nfs-server-provisioner-0" Feb 13 20:21:29.845167 containerd[1477]: time="2025-02-13T20:21:29.845095977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:68d0800e-95f4-43b2-a265-f96c6bb40611,Namespace:default,Attempt:0,}" Feb 13 20:21:29.888342 kubelet[1780]: E0213 20:21:29.888223 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:30.192037 systemd-networkd[1377]: cali60e51b789ff: Link UP Feb 13 20:21:30.193632 systemd-networkd[1377]: cali60e51b789ff: Gained carrier Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:29.974 [INFO][4038] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.133.95-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 68d0800e-95f4-43b2-a265-f96c6bb40611 1387 0 2025-02-13 20:21:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 64.23.133.95 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:29.976 [INFO][4038] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.034 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" HandleID="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Workload="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.059 [INFO][4046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" HandleID="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Workload="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291170), Attrs:map[string]string{"namespace":"default", "node":"64.23.133.95", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 20:21:30.034431203 +0000 UTC"}, Hostname:"64.23.133.95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.059 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.059 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.059 [INFO][4046] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.133.95' Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.065 [INFO][4046] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.074 [INFO][4046] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.093 [INFO][4046] ipam/ipam.go 489: Trying affinity for 192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.100 [INFO][4046] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.107 [INFO][4046] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.107 [INFO][4046] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.116 [INFO][4046] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.129 [INFO][4046] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.179 [INFO][4046] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.4/26] block=192.168.103.0/26 
handle="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.179 [INFO][4046] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.4/26] handle="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" host="64.23.133.95" Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.179 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:30.279381 containerd[1477]: 2025-02-13 20:21:30.179 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.4/26] IPv6=[] ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" HandleID="k8s-pod-network.7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Workload="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.281622 containerd[1477]: 2025-02-13 20:21:30.184 [INFO][4038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"68d0800e-95f4-43b2-a265-f96c6bb40611", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:30.281622 containerd[1477]: 2025-02-13 20:21:30.185 [INFO][4038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.4/32] ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.281622 containerd[1477]: 2025-02-13 20:21:30.185 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.281622 containerd[1477]: 2025-02-13 20:21:30.192 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.284145 containerd[1477]: 2025-02-13 20:21:30.194 [INFO][4038] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"68d0800e-95f4-43b2-a265-f96c6bb40611", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e2:ff:f9:8c:61:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:30.284145 containerd[1477]: 2025-02-13 20:21:30.274 [INFO][4038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="64.23.133.95-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:21:30.360217 containerd[1477]: time="2025-02-13T20:21:30.359779366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:30.360217 containerd[1477]: time="2025-02-13T20:21:30.360023677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:30.360505 containerd[1477]: time="2025-02-13T20:21:30.360050101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:30.361527 containerd[1477]: time="2025-02-13T20:21:30.361418477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:30.401243 systemd[1]: Started cri-containerd-7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f.scope - libcontainer container 7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f. Feb 13 20:21:30.468287 containerd[1477]: time="2025-02-13T20:21:30.467784808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:68d0800e-95f4-43b2-a265-f96c6bb40611,Namespace:default,Attempt:0,} returns sandbox id \"7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f\"" Feb 13 20:21:30.473499 containerd[1477]: time="2025-02-13T20:21:30.473141391Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 20:21:30.889306 kubelet[1780]: E0213 20:21:30.889221 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:31.893193 kubelet[1780]: E0213 20:21:31.893099 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:32.007368 systemd-networkd[1377]: cali60e51b789ff: Gained IPv6LL Feb 13 20:21:32.896003 kubelet[1780]: E0213 20:21:32.895709 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:33.898431 kubelet[1780]: E0213 20:21:33.898229 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:34.899505 kubelet[1780]: E0213 20:21:34.899416 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:35.826254 kubelet[1780]: E0213 20:21:35.823153 1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:35.913525 kubelet[1780]: E0213 20:21:35.913246 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:36.036644 containerd[1477]: time="2025-02-13T20:21:36.029488626Z" level=info msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.292 [WARNING][4150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d11c2fc0-063e-4017-ba54-3c29f7590e21", ResourceVersion:"1352", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303", Pod:"nginx-deployment-85f456d6dd-8289b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3986c6d02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.295 [INFO][4150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.295 [INFO][4150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" iface="eth0" netns="" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.295 [INFO][4150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.295 [INFO][4150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.461 [INFO][4156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.461 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.461 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.495 [WARNING][4156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.495 [INFO][4156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.503 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:36.538961 containerd[1477]: 2025-02-13 20:21:36.514 [INFO][4150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:36.540032 containerd[1477]: time="2025-02-13T20:21:36.539978938Z" level=info msg="TearDown network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" successfully" Feb 13 20:21:36.540362 containerd[1477]: time="2025-02-13T20:21:36.540337878Z" level=info msg="StopPodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" returns successfully" Feb 13 20:21:36.575910 containerd[1477]: time="2025-02-13T20:21:36.575805931Z" level=info msg="RemovePodSandbox for \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" Feb 13 20:21:36.579309 containerd[1477]: time="2025-02-13T20:21:36.576988635Z" level=info msg="Forcibly stopping sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\"" Feb 13 20:21:36.916464 kubelet[1780]: E0213 20:21:36.916382 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:36.830 [WARNING][4174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d11c2fc0-063e-4017-ba54-3c29f7590e21", ResourceVersion:"1352", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"101877d49d9c89f8584112808a7fe8a717a3905bf8992722b489f6f410c70303", Pod:"nginx-deployment-85f456d6dd-8289b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3986c6d02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:36.830 [INFO][4174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:36.830 [INFO][4174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" iface="eth0" netns="" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:36.830 [INFO][4174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:36.830 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.001 [INFO][4186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.002 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.002 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.044 [WARNING][4186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.044 [INFO][4186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" HandleID="k8s-pod-network.79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Workload="64.23.133.95-k8s-nginx--deployment--85f456d6dd--8289b-eth0" Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.048 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:37.064786 containerd[1477]: 2025-02-13 20:21:37.052 [INFO][4174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3" Feb 13 20:21:37.064786 containerd[1477]: time="2025-02-13T20:21:37.062509410Z" level=info msg="TearDown network for sandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" successfully" Feb 13 20:21:37.117862 containerd[1477]: time="2025-02-13T20:21:37.117678219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:21:37.118192 containerd[1477]: time="2025-02-13T20:21:37.118111225Z" level=info msg="RemovePodSandbox \"79f876a38e7f6d3ff2f569c588e6351ef97036e2f43397ae9b95ab81d985f4e3\" returns successfully" Feb 13 20:21:37.120927 containerd[1477]: time="2025-02-13T20:21:37.120320231Z" level=info msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.273 [WARNING][4205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0", GenerateName:"calico-kube-controllers-5d558c6c6c-", Namespace:"calico-system", SelfLink:"", UID:"8386e5da-6e1b-4bc1-b820-a5872769500e", ResourceVersion:"1317", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d558c6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe", Pod:"calico-kube-controllers-5d558c6c6c-njt6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17638900755", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.274 [INFO][4205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.274 [INFO][4205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" iface="eth0" netns="" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.274 [INFO][4205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.274 [INFO][4205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.387 [INFO][4211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.387 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.387 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.421 [WARNING][4211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.421 [INFO][4211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.436 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:37.446092 containerd[1477]: 2025-02-13 20:21:37.443 [INFO][4205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.447303 containerd[1477]: time="2025-02-13T20:21:37.446169074Z" level=info msg="TearDown network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" successfully" Feb 13 20:21:37.447303 containerd[1477]: time="2025-02-13T20:21:37.446209047Z" level=info msg="StopPodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" returns successfully" Feb 13 20:21:37.447303 containerd[1477]: time="2025-02-13T20:21:37.447112663Z" level=info msg="RemovePodSandbox for \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" Feb 13 20:21:37.447303 containerd[1477]: time="2025-02-13T20:21:37.447155758Z" level=info msg="Forcibly stopping sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\"" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.590 [WARNING][4229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0", GenerateName:"calico-kube-controllers-5d558c6c6c-", Namespace:"calico-system", SelfLink:"", UID:"8386e5da-6e1b-4bc1-b820-a5872769500e", ResourceVersion:"1317", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d558c6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"25820df07cd3360539f1e3eb5c0b12a356dce1e68945d189ecf3d8969ef79abe", Pod:"calico-kube-controllers-5d558c6c6c-njt6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17638900755", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.591 [INFO][4229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.591 [INFO][4229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" iface="eth0" netns="" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.591 [INFO][4229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.591 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.701 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.701 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.701 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.730 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.731 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" HandleID="k8s-pod-network.dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Workload="64.23.133.95-k8s-calico--kube--controllers--5d558c6c6c--njt6l-eth0" Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.757 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:37.799246 containerd[1477]: 2025-02-13 20:21:37.762 [INFO][4229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286" Feb 13 20:21:37.799246 containerd[1477]: time="2025-02-13T20:21:37.771396323Z" level=info msg="TearDown network for sandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" successfully" Feb 13 20:21:37.819460 containerd[1477]: time="2025-02-13T20:21:37.818162771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:21:37.819460 containerd[1477]: time="2025-02-13T20:21:37.818292612Z" level=info msg="RemovePodSandbox \"dafeba040e105c6fdf0fc6d3a1431d83ad2c6ebb8b5e2a31bd3b40a6eda5f286\" returns successfully" Feb 13 20:21:37.819460 containerd[1477]: time="2025-02-13T20:21:37.819276092Z" level=info msg="StopPodSandbox for \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\"" Feb 13 20:21:37.819460 containerd[1477]: time="2025-02-13T20:21:37.819389444Z" level=info msg="TearDown network for sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" successfully" Feb 13 20:21:37.819460 containerd[1477]: time="2025-02-13T20:21:37.819401726Z" level=info msg="StopPodSandbox for \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" returns successfully" Feb 13 20:21:37.820109 containerd[1477]: time="2025-02-13T20:21:37.819996891Z" level=info msg="RemovePodSandbox for \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\"" Feb 13 20:21:37.820109 containerd[1477]: time="2025-02-13T20:21:37.820032476Z" level=info msg="Forcibly stopping sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\"" Feb 13 20:21:37.820213 containerd[1477]: time="2025-02-13T20:21:37.820115379Z" level=info msg="TearDown network for sandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" successfully" Feb 13 20:21:37.835625 containerd[1477]: time="2025-02-13T20:21:37.835320355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:21:37.835625 containerd[1477]: time="2025-02-13T20:21:37.835445375Z" level=info msg="RemovePodSandbox \"6dc08aa08814f834b77e528abf437f97d1e8c43bd62922feb06d386f2bd218b1\" returns successfully" Feb 13 20:21:37.836750 containerd[1477]: time="2025-02-13T20:21:37.836431652Z" level=info msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" Feb 13 20:21:37.919082 kubelet[1780]: E0213 20:21:37.918403 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:37.951 [WARNING][4253] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-csi--node--driver--hvn65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce22ba38-b4f8-4031-88e9-0196a2ef8f62", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02", Pod:"csi-node-driver-hvn65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c47fd70776", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:37.951 [INFO][4253] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:37.951 [INFO][4253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" iface="eth0" netns="" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:37.951 [INFO][4253] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:37.951 [INFO][4253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.059 [INFO][4260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.059 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.060 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.077 [WARNING][4260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.077 [INFO][4260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.085 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:38.090544 containerd[1477]: 2025-02-13 20:21:38.087 [INFO][4253] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.092402 containerd[1477]: time="2025-02-13T20:21:38.091562967Z" level=info msg="TearDown network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" successfully" Feb 13 20:21:38.092402 containerd[1477]: time="2025-02-13T20:21:38.091644025Z" level=info msg="StopPodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" returns successfully" Feb 13 20:21:38.093917 containerd[1477]: time="2025-02-13T20:21:38.093344053Z" level=info msg="RemovePodSandbox for \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" Feb 13 20:21:38.093917 containerd[1477]: time="2025-02-13T20:21:38.093422745Z" level=info msg="Forcibly stopping sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\"" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.229 [WARNING][4279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-csi--node--driver--hvn65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce22ba38-b4f8-4031-88e9-0196a2ef8f62", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 20, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"802657cb0e4df934d6b1a2cf3ab3fbc2668cc0b32591013e0d8f997766cbea02", Pod:"csi-node-driver-hvn65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c47fd70776", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.231 [INFO][4279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.231 [INFO][4279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" iface="eth0" netns="" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.231 [INFO][4279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.231 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.317 [INFO][4286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.317 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.317 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.339 [WARNING][4286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.340 [INFO][4286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" HandleID="k8s-pod-network.af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Workload="64.23.133.95-k8s-csi--node--driver--hvn65-eth0" Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.343 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:38.351962 containerd[1477]: 2025-02-13 20:21:38.347 [INFO][4279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2" Feb 13 20:21:38.355116 containerd[1477]: time="2025-02-13T20:21:38.353242992Z" level=info msg="TearDown network for sandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" successfully" Feb 13 20:21:38.373787 containerd[1477]: time="2025-02-13T20:21:38.373708579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:21:38.374741 containerd[1477]: time="2025-02-13T20:21:38.374077974Z" level=info msg="RemovePodSandbox \"af5dda4a83fa00007e6a9c47e25a462b7dc8663dbdca1669f97d7f91766b67d2\" returns successfully" Feb 13 20:21:38.395824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16629141.mount: Deactivated successfully. Feb 13 20:21:38.478828 systemd[1]: run-containerd-runc-k8s.io-c86c3680c579353b4a1cea86e5cca9dd6960c4ad349a76e979d94ce686eb62b8-runc.jKh6UL.mount: Deactivated successfully. 
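The teardown sequence above shows the pattern that lets RemovePodSandbox be retried safely: the CNI plugin takes the host-wide IPAM lock, tries to release the address for the sandbox's handle, and treats a missing handle as success ("Asked to release address but it doesn't exist. Ignoring"), so the forced second StopPodSandbox completes even though the first pass already freed everything. Below is a minimal sketch of that idempotent-release pattern; every name in it is invented for illustration, and it is not Calico's actual API.

    // Sketch of idempotent IP release under a host-wide lock, as seen in
    // the Calico IPAM log lines above. Illustrative names only.
    package main

    import (
        "fmt"
        "sync"
    )

    type ipamStore struct {
        mu      sync.Mutex        // stands in for the "host-wide IPAM lock"
        handles map[string]string // handleID -> assigned IP
    }

    func (s *ipamStore) releaseByHandle(handleID string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        ip, ok := s.handles[handleID]
        if !ok {
            // Mirrors "Asked to release address but it doesn't exist. Ignoring".
            fmt.Printf("WARNING: handle %s not found, ignoring\n", handleID)
            return
        }
        delete(s.handles, handleID)
        fmt.Printf("released %s for handle %s\n", ip, handleID)
    }

    func main() {
        s := &ipamStore{handles: map[string]string{}}
        h := "k8s-pod-network.af5dda4a83fa0000..." // handle ID truncated for the sketch
        s.releaseByHandle(h) // first teardown: nothing assigned, ignored
        s.releaseByHandle(h) // forced second teardown: still succeeds
    }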
Feb 13 20:21:38.922042 kubelet[1780]: E0213 20:21:38.919400 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:39.922671 kubelet[1780]: E0213 20:21:39.922590 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:40.924985 kubelet[1780]: E0213 20:21:40.924788 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:41.925502 kubelet[1780]: E0213 20:21:41.925421 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:42.926514 kubelet[1780]: E0213 20:21:42.926435 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:43.345840 containerd[1477]: time="2025-02-13T20:21:43.345628485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:43.351897 containerd[1477]: time="2025-02-13T20:21:43.350871248Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 20:21:43.351897 containerd[1477]: time="2025-02-13T20:21:43.351225433Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:43.356094 containerd[1477]: time="2025-02-13T20:21:43.356025729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:43.359990 containerd[1477]: time="2025-02-13T20:21:43.359442368Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 12.88622307s" Feb 13 20:21:43.359990 containerd[1477]: time="2025-02-13T20:21:43.359519997Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 20:21:43.367553 containerd[1477]: time="2025-02-13T20:21:43.367462671Z" level=info msg="CreateContainer within sandbox \"7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 20:21:43.400514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295887438.mount: Deactivated successfully. 
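The completed pull above reports both a byte count and a wall-clock duration, which allows a quick throughput estimate: 91,039,406 bytes in 12.88622307s is roughly 6.7 MiB/s. A back-of-envelope check using only the figures taken from the log line:

    // Effective throughput of the nfs-provisioner pull logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        bytesRead := 91039406.0 // "bytes read" from the log
        dur, _ := time.ParseDuration("12.88622307s")
        mibPerSec := bytesRead / dur.Seconds() / (1 << 20)
        fmt.Printf("effective pull throughput: %.2f MiB/s\n", mibPerSec) // ~6.74 MiB/s
    }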
Feb 13 20:21:43.413527 containerd[1477]: time="2025-02-13T20:21:43.413109012Z" level=info msg="CreateContainer within sandbox \"7039078c1916affc62e0aa62cb433027e225a0d3576c41fa58ef79e3c5d9804f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f362ab671d628482d520cea7285a9ab8419680bfce75203474f741c43d176de5\"" Feb 13 20:21:43.414570 containerd[1477]: time="2025-02-13T20:21:43.414504098Z" level=info msg="StartContainer for \"f362ab671d628482d520cea7285a9ab8419680bfce75203474f741c43d176de5\"" Feb 13 20:21:43.474270 systemd[1]: Started cri-containerd-f362ab671d628482d520cea7285a9ab8419680bfce75203474f741c43d176de5.scope - libcontainer container f362ab671d628482d520cea7285a9ab8419680bfce75203474f741c43d176de5. Feb 13 20:21:43.531082 containerd[1477]: time="2025-02-13T20:21:43.531001956Z" level=info msg="StartContainer for \"f362ab671d628482d520cea7285a9ab8419680bfce75203474f741c43d176de5\" returns successfully" Feb 13 20:21:43.790523 kubelet[1780]: I0213 20:21:43.790176 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.901507343 podStartE2EDuration="14.790146738s" podCreationTimestamp="2025-02-13 20:21:29 +0000 UTC" firstStartedPulling="2025-02-13 20:21:30.472529023 +0000 UTC m=+55.869987524" lastFinishedPulling="2025-02-13 20:21:43.361168412 +0000 UTC m=+68.758626919" observedRunningTime="2025-02-13 20:21:43.778306364 +0000 UTC m=+69.175764897" watchObservedRunningTime="2025-02-13 20:21:43.790146738 +0000 UTC m=+69.187605271" Feb 13 20:21:43.926808 kubelet[1780]: E0213 20:21:43.926727 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:44.928662 kubelet[1780]: E0213 20:21:44.928409 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:45.929592 kubelet[1780]: E0213 20:21:45.929497 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:46.930915 kubelet[1780]: E0213 20:21:46.930630 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:47.033652 kubelet[1780]: E0213 20:21:47.033584 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:21:47.931623 kubelet[1780]: E0213 20:21:47.931538 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:48.932591 kubelet[1780]: E0213 20:21:48.932355 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:49.933661 kubelet[1780]: E0213 20:21:49.933517 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:50.934769 kubelet[1780]: E0213 20:21:50.934690 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:51.936565 kubelet[1780]: E0213 20:21:51.935939 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:52.936253 kubelet[1780]: E0213 20:21:52.936161 1780 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:53.937462 kubelet[1780]: E0213 20:21:53.937382 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:54.937985 kubelet[1780]: E0213 20:21:54.937898 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:55.821930 kubelet[1780]: E0213 20:21:55.821826 1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:55.938791 kubelet[1780]: E0213 20:21:55.938692 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:56.939100 kubelet[1780]: E0213 20:21:56.939004 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:57.939485 kubelet[1780]: E0213 20:21:57.939411 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:58.940040 kubelet[1780]: E0213 20:21:58.939910 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:21:59.941161 kubelet[1780]: E0213 20:21:59.941003 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:00.942242 kubelet[1780]: E0213 20:22:00.941667 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:01.942510 kubelet[1780]: E0213 20:22:01.942415 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:02.943262 kubelet[1780]: E0213 20:22:02.943167 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:03.577118 systemd[1]: run-containerd-runc-k8s.io-19d5789cfe81226a58f4adfb8955d36496a00dcb61de44d7070f1c7ce4b0faff-runc.BsIrBn.mount: Deactivated successfully. Feb 13 20:22:03.945100 kubelet[1780]: E0213 20:22:03.943929 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:04.945120 kubelet[1780]: E0213 20:22:04.945029 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:05.945482 kubelet[1780]: E0213 20:22:05.945387 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:06.945661 kubelet[1780]: E0213 20:22:06.945569 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:07.870762 kubelet[1780]: I0213 20:22:07.870669 1780 topology_manager.go:215] "Topology Admit Handler" podUID="a5b53787-032e-4500-9b46-1868989b2fe0" podNamespace="default" podName="test-pod-1" Feb 13 20:22:07.883232 systemd[1]: Created slice kubepods-besteffort-poda5b53787_032e_4500_9b46_1868989b2fe0.slice - libcontainer container kubepods-besteffort-poda5b53787_032e_4500_9b46_1868989b2fe0.slice. 
Feb 13 20:22:07.946722 kubelet[1780]: E0213 20:22:07.946640 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:08.050464 kubelet[1780]: I0213 20:22:08.049991 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-618be390-e561-4cc2-9b8f-3f17996a2aa1\" (UniqueName: \"kubernetes.io/nfs/a5b53787-032e-4500-9b46-1868989b2fe0-pvc-618be390-e561-4cc2-9b8f-3f17996a2aa1\") pod \"test-pod-1\" (UID: \"a5b53787-032e-4500-9b46-1868989b2fe0\") " pod="default/test-pod-1" Feb 13 20:22:08.050464 kubelet[1780]: I0213 20:22:08.050071 1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsc5w\" (UniqueName: \"kubernetes.io/projected/a5b53787-032e-4500-9b46-1868989b2fe0-kube-api-access-rsc5w\") pod \"test-pod-1\" (UID: \"a5b53787-032e-4500-9b46-1868989b2fe0\") " pod="default/test-pod-1" Feb 13 20:22:08.259716 kernel: FS-Cache: Loaded Feb 13 20:22:08.401206 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:22:08.401420 kernel: RPC: Registered udp transport module. Feb 13 20:22:08.401471 kernel: RPC: Registered tcp transport module. Feb 13 20:22:08.401510 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:22:08.409469 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:22:08.579066 kubelet[1780]: E0213 20:22:08.579014 1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:22:08.951162 kubelet[1780]: E0213 20:22:08.947880 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:09.015106 kernel: NFS: Registering the id_resolver key type Feb 13 20:22:09.018429 kernel: Key type id_resolver registered Feb 13 20:22:09.028616 kernel: Key type id_legacy registered Feb 13 20:22:09.121759 nfsidmap[4489]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.1-6-23070f926e' Feb 13 20:22:09.131083 nfsidmap[4490]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.1-6-23070f926e' Feb 13 20:22:09.391519 containerd[1477]: time="2025-02-13T20:22:09.391328967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a5b53787-032e-4500-9b46-1868989b2fe0,Namespace:default,Attempt:0,}" Feb 13 20:22:09.685765 systemd-networkd[1377]: cali5ec59c6bf6e: Link UP Feb 13 20:22:09.690750 systemd-networkd[1377]: cali5ec59c6bf6e: Gained carrier Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.514 [INFO][4491] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {64.23.133.95-k8s-test--pod--1-eth0 default a5b53787-032e-4500-9b46-1868989b2fe0 1508 0 2025-02-13 20:21:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 64.23.133.95 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.514 [INFO][4491] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.590 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" HandleID="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Workload="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.613 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" HandleID="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Workload="64.23.133.95-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318160), Attrs:map[string]string{"namespace":"default", "node":"64.23.133.95", "pod":"test-pod-1", "timestamp":"2025-02-13 20:22:09.590909352 +0000 UTC"}, Hostname:"64.23.133.95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.614 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.614 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.614 [INFO][4503] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '64.23.133.95' Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.618 [INFO][4503] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.629 [INFO][4503] ipam/ipam.go 372: Looking up existing affinities for host host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.638 [INFO][4503] ipam/ipam.go 489: Trying affinity for 192.168.103.0/26 host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.642 [INFO][4503] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.647 [INFO][4503] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.647 [INFO][4503] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.651 [INFO][4503] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5 Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.658 [INFO][4503] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.673 [INFO][4503] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.103.5/26] block=192.168.103.0/26 handle="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.674 [INFO][4503] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.5/26] handle="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" host="64.23.133.95" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.674 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.674 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.5/26] IPv6=[] ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" HandleID="k8s-pod-network.832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Workload="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.712356 containerd[1477]: 2025-02-13 20:22:09.678 [INFO][4491] cni-plugin/k8s.go 386: Populated endpoint ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a5b53787-032e-4500-9b46-1868989b2fe0", ResourceVersion:"1508", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:09.716486 containerd[1477]: 2025-02-13 20:22:09.679 [INFO][4491] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.5/32] ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.716486 containerd[1477]: 2025-02-13 20:22:09.679 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.716486 containerd[1477]: 2025-02-13 20:22:09.689 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.716486 containerd[1477]: 2025-02-13 20:22:09.689 [INFO][4491] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"64.23.133.95-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a5b53787-032e-4500-9b46-1868989b2fe0", ResourceVersion:"1508", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"64.23.133.95", ContainerID:"832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0e:9f:b2:82:ea:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:09.716486 containerd[1477]: 2025-02-13 20:22:09.705 [INFO][4491] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="64.23.133.95-k8s-test--pod--1-eth0" Feb 13 20:22:09.780094 containerd[1477]: time="2025-02-13T20:22:09.779642623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:09.780094 containerd[1477]: time="2025-02-13T20:22:09.779745367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:09.780094 containerd[1477]: time="2025-02-13T20:22:09.779788326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:09.780094 containerd[1477]: time="2025-02-13T20:22:09.779952586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:09.825293 systemd[1]: Started cri-containerd-832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5.scope - libcontainer container 832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5. 
Feb 13 20:22:09.908020 containerd[1477]: time="2025-02-13T20:22:09.907807735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a5b53787-032e-4500-9b46-1868989b2fe0,Namespace:default,Attempt:0,} returns sandbox id \"832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5\"" Feb 13 20:22:09.911998 containerd[1477]: time="2025-02-13T20:22:09.911548854Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:22:09.950009 kubelet[1780]: E0213 20:22:09.949795 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:10.405399 containerd[1477]: time="2025-02-13T20:22:10.403757192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:10.408176 containerd[1477]: time="2025-02-13T20:22:10.408095428Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 20:22:10.417094 containerd[1477]: time="2025-02-13T20:22:10.417021220Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 505.395087ms" Feb 13 20:22:10.417330 containerd[1477]: time="2025-02-13T20:22:10.417303598Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:22:10.447459 containerd[1477]: time="2025-02-13T20:22:10.447390888Z" level=info msg="CreateContainer within sandbox \"832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 20:22:10.475076 containerd[1477]: time="2025-02-13T20:22:10.475012662Z" level=info msg="CreateContainer within sandbox \"832800154624a09693eb509208e2f21cb041acb0b084a9be1a04396d9c0b1ff5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"04b565460720d6519162cff39995e280d59b373f241767718737ac85e389c555\"" Feb 13 20:22:10.477918 containerd[1477]: time="2025-02-13T20:22:10.476291816Z" level=info msg="StartContainer for \"04b565460720d6519162cff39995e280d59b373f241767718737ac85e389c555\"" Feb 13 20:22:10.535164 systemd[1]: Started cri-containerd-04b565460720d6519162cff39995e280d59b373f241767718737ac85e389c555.scope - libcontainer container 04b565460720d6519162cff39995e280d59b373f241767718737ac85e389c555. 
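Compare this nginx pull with the nfs-provisioner pull earlier: it finished in about 505ms with only 61 bytes read against a reported size of 73,054,371, consistent with the layers already being in containerd's content store and only registry metadata being revalidated (note the ImageUpdate event here versus ImageCreate for the cold pull). A toy heuristic for telling the two cases apart from those two numbers; the threshold is invented purely for illustration:

    // Classify a pull as warm or cold from bytes read vs. reported size,
    // using the two pulls logged above. Threshold is illustrative.
    package main

    import "fmt"

    func pullKind(bytesRead, imageSize int64) string {
        if bytesRead < imageSize/100 {
            return "warm (content cached, metadata-only fetch)"
        }
        return "cold (layers downloaded)"
    }

    func main() {
        fmt.Println("nfs-provisioner:", pullKind(91039406, 91036984)) // cold
        fmt.Println("nginx:          ", pullKind(61, 73054371))       // warm
    }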
Feb 13 20:22:10.588153 containerd[1477]: time="2025-02-13T20:22:10.586656764Z" level=info msg="StartContainer for \"04b565460720d6519162cff39995e280d59b373f241767718737ac85e389c555\" returns successfully" Feb 13 20:22:10.880179 kubelet[1780]: I0213 20:22:10.880067 1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=40.372112458 podStartE2EDuration="40.880037439s" podCreationTimestamp="2025-02-13 20:21:30 +0000 UTC" firstStartedPulling="2025-02-13 20:22:09.910832623 +0000 UTC m=+95.308291122" lastFinishedPulling="2025-02-13 20:22:10.418757588 +0000 UTC m=+95.816216103" observedRunningTime="2025-02-13 20:22:10.879717353 +0000 UTC m=+96.277175890" watchObservedRunningTime="2025-02-13 20:22:10.880037439 +0000 UTC m=+96.277495972" Feb 13 20:22:10.962737 kubelet[1780]: E0213 20:22:10.951049 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:11.681763 systemd-networkd[1377]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 20:22:11.963236 kubelet[1780]: E0213 20:22:11.963038 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:12.964389 kubelet[1780]: E0213 20:22:12.964198 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:13.966306 kubelet[1780]: E0213 20:22:13.965593 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:14.965933 kubelet[1780]: E0213 20:22:14.965831 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:15.822305 kubelet[1780]: E0213 20:22:15.822212 1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:15.966908 kubelet[1780]: E0213 20:22:15.966772 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:16.967816 kubelet[1780]: E0213 20:22:16.967707 1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
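The pod_startup_latency_tracker line for test-pod-1 above can be cross-checked from its own fields: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images, and the logged values agree to within rounding (the same relation holds for the nfs-server-provisioner-0 line earlier). A short verification using only the timestamps from the log:

    // Cross-check: SLO duration = E2E duration - image pull duration,
    // using the test-pod-1 fields logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        first, _ := time.Parse(layout, "2025-02-13 20:22:09.910832623 +0000 UTC") // firstStartedPulling
        last, _ := time.Parse(layout, "2025-02-13 20:22:10.418757588 +0000 UTC")  // lastFinishedPulling
        e2e := time.Duration(40.880037439 * float64(time.Second))                 // podStartE2EDuration
        pull := last.Sub(first)                                                   // ~507.92ms
        slo := e2e - pull
        fmt.Println("podStartSLOduration ≈", slo) // logged: 40.372112458s, agrees to within rounding
    }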