Feb 13 15:52:22.160581 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:52:22.160628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:52:22.160650 kernel: BIOS-provided physical RAM map:
Feb 13 15:52:22.160662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:52:22.160674 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:52:22.160685 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:52:22.160700 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb 13 15:52:22.160715 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb 13 15:52:22.160727 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:52:22.160739 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:52:22.160754 kernel: NX (Execute Disable) protection: active
Feb 13 15:52:22.160767 kernel: APIC: Static calls initialized
Feb 13 15:52:22.160779 kernel: SMBIOS 2.8 present.
Feb 13 15:52:22.160792 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 15:52:22.160807 kernel: Hypervisor detected: KVM
Feb 13 15:52:22.160821 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:52:22.160844 kernel: kvm-clock: using sched offset of 5000513002 cycles
Feb 13 15:52:22.160859 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:52:22.160871 kernel: tsc: Detected 1999.999 MHz processor
Feb 13 15:52:22.160883 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:52:22.160896 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:52:22.160909 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb 13 15:52:22.160921 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:52:22.160933 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:52:22.160951 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:52:22.160965 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb 13 15:52:22.160978 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.160991 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161005 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161018 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 15:52:22.161030 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161042 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161055 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161072 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:52:22.161087 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 15:52:22.161099 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 15:52:22.161111 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 15:52:22.161124 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 15:52:22.161135 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 15:52:22.161147 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 15:52:22.161167 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 15:52:22.161182 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:52:22.163301 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:52:22.163340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:52:22.163355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 15:52:22.163369 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb 13 15:52:22.163383 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb 13 15:52:22.163407 kernel: Zone ranges:
Feb 13 15:52:22.163421 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:52:22.163435 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffd7fff]
Feb 13 15:52:22.163449 kernel:   Normal   empty
Feb 13 15:52:22.163470 kernel: Movable zone start for each node
Feb 13 15:52:22.163496 kernel: Early memory node ranges
Feb 13 15:52:22.163517 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:52:22.163532 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb 13 15:52:22.163547 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb 13 15:52:22.163566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:52:22.163580 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:52:22.163594 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb 13 15:52:22.163607 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:52:22.163619 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:52:22.163631 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:52:22.163644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:52:22.163655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:52:22.163669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:52:22.163688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:52:22.163702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:52:22.163714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:52:22.163726 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:52:22.163740 kernel: TSC deadline timer available
Feb 13 15:52:22.163753 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:52:22.163766 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:52:22.163779 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 15:52:22.163797 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:52:22.163812 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:52:22.163821 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:52:22.163830 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:52:22.163839 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:52:22.163847 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:52:22.163855 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 15:52:22.163867 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:52:22.163876 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:52:22.163888 kernel: random: crng init done
Feb 13 15:52:22.163896 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:52:22.163905 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:52:22.163915 kernel: Fallback order for Node 0: 0 
Feb 13 15:52:22.163925 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515800
Feb 13 15:52:22.163933 kernel: Policy zone: DMA32
Feb 13 15:52:22.163941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:52:22.163950 kernel: Memory: 1969144K/2096600K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127196K reserved, 0K cma-reserved)
Feb 13 15:52:22.163959 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:52:22.163971 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:52:22.163979 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:52:22.163987 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:52:22.163996 kernel: Dynamic Preempt: voluntary
Feb 13 15:52:22.164008 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:52:22.164027 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:52:22.164041 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:52:22.164056 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:52:22.164067 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:52:22.164080 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:52:22.164089 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:52:22.164099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:52:22.164108 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:52:22.164125 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:52:22.164134 kernel: Console: colour VGA+ 80x25
Feb 13 15:52:22.164142 kernel: printk: console [tty0] enabled
Feb 13 15:52:22.164150 kernel: printk: console [ttyS0] enabled
Feb 13 15:52:22.164158 kernel: ACPI: Core revision 20230628
Feb 13 15:52:22.164167 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:52:22.164178 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:52:22.164186 kernel: x2apic enabled
Feb 13 15:52:22.164194 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:52:22.164251 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:52:22.164265 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Feb 13 15:52:22.164278 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Feb 13 15:52:22.164295 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 15:52:22.164309 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 15:52:22.164339 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:52:22.164354 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:52:22.164363 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:52:22.164375 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:52:22.164384 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 15:52:22.164393 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:52:22.164402 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:52:22.164411 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:52:22.164420 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:52:22.164438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:52:22.164447 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:52:22.164456 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:52:22.164465 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:52:22.164492 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:52:22.164501 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:52:22.164510 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:52:22.164519 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:52:22.164530 kernel: landlock: Up and running.
Feb 13 15:52:22.164539 kernel: SELinux:  Initializing.
Feb 13 15:52:22.164548 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:52:22.164557 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:52:22.164566 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 15:52:22.164575 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:52:22.164584 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:52:22.164593 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:52:22.164605 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 15:52:22.164614 kernel: signal: max sigframe size: 1776
Feb 13 15:52:22.164623 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:52:22.164632 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:52:22.164641 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:52:22.164649 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:52:22.164658 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:52:22.164667 kernel: .... node  #0, CPUs:      #1
Feb 13 15:52:22.164675 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:52:22.164685 kernel: smpboot: Max logical packages: 1
Feb 13 15:52:22.164711 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Feb 13 15:52:22.164721 kernel: devtmpfs: initialized
Feb 13 15:52:22.164736 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:52:22.164750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:52:22.164762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:52:22.164776 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:52:22.164788 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:52:22.164801 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:52:22.164814 kernel: audit: type=2000 audit(1739461941.268:1): state=initialized audit_enabled=0 res=1
Feb 13 15:52:22.164834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:52:22.164847 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:52:22.164860 kernel: cpuidle: using governor menu
Feb 13 15:52:22.164895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:52:22.164907 kernel: dca service started, version 1.12.1
Feb 13 15:52:22.164920 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:52:22.164933 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:52:22.164947 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:52:22.164960 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:52:22.164980 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:52:22.164995 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:52:22.165008 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:52:22.165021 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:52:22.165030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:52:22.165039 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:52:22.165047 kernel: ACPI: Interpreter enabled
Feb 13 15:52:22.165056 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:52:22.165065 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:52:22.165079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:52:22.165095 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:52:22.165108 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 15:52:22.165117 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:52:22.167524 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:52:22.167976 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:52:22.168095 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:52:22.168116 kernel: acpiphp: Slot [3] registered
Feb 13 15:52:22.168125 kernel: acpiphp: Slot [4] registered
Feb 13 15:52:22.168134 kernel: acpiphp: Slot [5] registered
Feb 13 15:52:22.168143 kernel: acpiphp: Slot [6] registered
Feb 13 15:52:22.168152 kernel: acpiphp: Slot [7] registered
Feb 13 15:52:22.168161 kernel: acpiphp: Slot [8] registered
Feb 13 15:52:22.168170 kernel: acpiphp: Slot [9] registered
Feb 13 15:52:22.168179 kernel: acpiphp: Slot [10] registered
Feb 13 15:52:22.168188 kernel: acpiphp: Slot [11] registered
Feb 13 15:52:22.168199 kernel: acpiphp: Slot [12] registered
Feb 13 15:52:22.168235 kernel: acpiphp: Slot [13] registered
Feb 13 15:52:22.168243 kernel: acpiphp: Slot [14] registered
Feb 13 15:52:22.168252 kernel: acpiphp: Slot [15] registered
Feb 13 15:52:22.168261 kernel: acpiphp: Slot [16] registered
Feb 13 15:52:22.168270 kernel: acpiphp: Slot [17] registered
Feb 13 15:52:22.168279 kernel: acpiphp: Slot [18] registered
Feb 13 15:52:22.168288 kernel: acpiphp: Slot [19] registered
Feb 13 15:52:22.168296 kernel: acpiphp: Slot [20] registered
Feb 13 15:52:22.168305 kernel: acpiphp: Slot [21] registered
Feb 13 15:52:22.168318 kernel: acpiphp: Slot [22] registered
Feb 13 15:52:22.168326 kernel: acpiphp: Slot [23] registered
Feb 13 15:52:22.168335 kernel: acpiphp: Slot [24] registered
Feb 13 15:52:22.168343 kernel: acpiphp: Slot [25] registered
Feb 13 15:52:22.168352 kernel: acpiphp: Slot [26] registered
Feb 13 15:52:22.168361 kernel: acpiphp: Slot [27] registered
Feb 13 15:52:22.168369 kernel: acpiphp: Slot [28] registered
Feb 13 15:52:22.168378 kernel: acpiphp: Slot [29] registered
Feb 13 15:52:22.168386 kernel: acpiphp: Slot [30] registered
Feb 13 15:52:22.168394 kernel: acpiphp: Slot [31] registered
Feb 13 15:52:22.168406 kernel: PCI host bridge to bus 0000:00
Feb 13 15:52:22.168551 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:52:22.168645 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:52:22.168732 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:52:22.168817 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:52:22.168937 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 15:52:22.169031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:52:22.169236 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:52:22.169444 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:52:22.169574 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 15:52:22.169713 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc1e0-0xc1ef]
Feb 13 15:52:22.169820 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb 13 15:52:22.169955 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb 13 15:52:22.170120 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb 13 15:52:22.170287 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb 13 15:52:22.170480 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 15:52:22.170639 kernel: pci 0000:00:01.2: reg 0x20: [io  0xc180-0xc19f]
Feb 13 15:52:22.170780 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:52:22.170901 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 15:52:22.171035 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 15:52:22.171493 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 15:52:22.171661 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 15:52:22.171819 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 15:52:22.171930 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 15:52:22.172029 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:52:22.172144 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:52:22.172419 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:52:22.172561 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc1a0-0xc1bf]
Feb 13 15:52:22.172673 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 15:52:22.172791 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 15:52:22.172963 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:52:22.173066 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc1c0-0xc1df]
Feb 13 15:52:22.173178 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 15:52:22.173610 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 15:52:22.173780 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 15:52:22.173926 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc100-0xc13f]
Feb 13 15:52:22.174072 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 15:52:22.174245 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 15:52:22.174425 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:52:22.174579 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc000-0xc07f]
Feb 13 15:52:22.174742 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 15:52:22.174934 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 15:52:22.175112 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:52:22.176119 kernel: pci 0000:00:07.0: reg 0x10: [io  0xc080-0xc0ff]
Feb 13 15:52:22.176326 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 15:52:22.176501 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 15:52:22.176710 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 15:52:22.176858 kernel: pci 0000:00:08.0: reg 0x10: [io  0xc140-0xc17f]
Feb 13 15:52:22.176965 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 15:52:22.176978 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:52:22.176988 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:52:22.176997 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:52:22.177007 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:52:22.177017 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:52:22.177031 kernel: iommu: Default domain type: Translated
Feb 13 15:52:22.177040 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:52:22.177049 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:52:22.177058 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:52:22.177067 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:52:22.177076 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb 13 15:52:22.177268 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 15:52:22.177438 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 15:52:22.177598 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:52:22.177616 kernel: vgaarb: loaded
Feb 13 15:52:22.177629 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:52:22.177642 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:52:22.177655 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:52:22.177668 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:52:22.177682 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:52:22.177694 kernel: pnp: PnP ACPI init
Feb 13 15:52:22.177708 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 15:52:22.177727 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:52:22.177740 kernel: NET: Registered PF_INET protocol family
Feb 13 15:52:22.177752 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:52:22.177767 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:52:22.177783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:52:22.177799 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:52:22.177816 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:52:22.177832 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:52:22.177848 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:52:22.177867 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:52:22.177882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:52:22.177897 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:52:22.178091 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:52:22.178337 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:52:22.178478 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:52:22.178618 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:52:22.178757 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 15:52:22.178946 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 15:52:22.179110 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:52:22.179133 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:52:22.179315 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 43793 usecs
Feb 13 15:52:22.179333 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:52:22.179347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:52:22.179360 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Feb 13 15:52:22.179375 kernel: Initialise system trusted keyrings
Feb 13 15:52:22.179392 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:52:22.179421 kernel: Key type asymmetric registered
Feb 13 15:52:22.179436 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:52:22.179452 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:52:22.179470 kernel: io scheduler mq-deadline registered
Feb 13 15:52:22.179488 kernel: io scheduler kyber registered
Feb 13 15:52:22.179501 kernel: io scheduler bfq registered
Feb 13 15:52:22.179513 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:52:22.179528 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 15:52:22.179540 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:52:22.179560 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:52:22.179574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:52:22.179589 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:52:22.179604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:52:22.179618 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:52:22.179630 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:52:22.179827 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 15:52:22.179854 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:52:22.179958 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 15:52:22.180053 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T15:52:21 UTC (1739461941)
Feb 13 15:52:22.180150 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:52:22.180164 kernel: intel_pstate: CPU model not supported
Feb 13 15:52:22.180174 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:52:22.180183 kernel: Segment Routing with IPv6
Feb 13 15:52:22.180193 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:52:22.180201 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:52:22.180284 kernel: Key type dns_resolver registered
Feb 13 15:52:22.180297 kernel: IPI shorthand broadcast: enabled
Feb 13 15:52:22.180307 kernel: sched_clock: Marking stable (1380167402, 156572070)->(1594347667, -57608195)
Feb 13 15:52:22.180317 kernel: registered taskstats version 1
Feb 13 15:52:22.180335 kernel: Loading compiled-in X.509 certificates
Feb 13 15:52:22.180350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:52:22.180362 kernel: Key type .fscrypt registered
Feb 13 15:52:22.180374 kernel: Key type fscrypt-provisioning registered
Feb 13 15:52:22.180390 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:52:22.180407 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:52:22.180419 kernel: ima: No architecture policies found
Feb 13 15:52:22.180433 kernel: clk: Disabling unused clocks
Feb 13 15:52:22.180460 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:52:22.180474 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:52:22.180510 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:52:22.180523 kernel: Run /init as init process
Feb 13 15:52:22.180532 kernel:   with arguments:
Feb 13 15:52:22.180542 kernel:     /init
Feb 13 15:52:22.180553 kernel:   with environment:
Feb 13 15:52:22.180562 kernel:     HOME=/
Feb 13 15:52:22.180572 kernel:     TERM=linux
Feb 13 15:52:22.180581 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:52:22.180595 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:52:22.180609 systemd[1]: Detected virtualization kvm.
Feb 13 15:52:22.180619 systemd[1]: Detected architecture x86-64.
Feb 13 15:52:22.180629 systemd[1]: Running in initrd.
Feb 13 15:52:22.180641 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:52:22.180651 systemd[1]: Hostname set to <localhost>.
Feb 13 15:52:22.180663 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:52:22.180673 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:52:22.180682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:22.180692 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:22.180702 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:52:22.180712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:52:22.180726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:52:22.180736 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:52:22.180748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:52:22.180758 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:52:22.180767 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:22.180777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:52:22.180786 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:52:22.180799 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:52:22.180811 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:52:22.180823 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:52:22.180833 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:52:22.180843 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:52:22.180855 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:52:22.180865 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:52:22.180876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:22.180885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:22.180895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:22.180905 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:52:22.180919 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:52:22.180937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:52:22.180955 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:52:22.180976 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:52:22.180993 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:52:22.181011 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:52:22.181077 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 15:52:22.181124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:22.181142 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:52:22.181159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:22.181180 systemd-journald[183]: Journal started
Feb 13 15:52:22.181250 systemd-journald[183]: Runtime Journal (/run/log/journal/ceb7eebc7abd472f9bc807d8cfd9741e) is 4.9M, max 39.3M, 34.4M free.
Feb 13 15:52:22.185380 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:52:22.186294 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:52:22.198994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:52:22.206907 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 15:52:22.208593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:52:22.236270 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:22.296642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:52:22.296702 kernel: Bridge firewalling registered
Feb 13 15:52:22.272308 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 15:52:22.299409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:22.301359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:22.311791 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:22.323436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:52:22.328982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:52:22.335337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:22.352691 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:22.354153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:22.363588 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:52:22.374605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:22.377355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:22.383620 dracut-cmdline[216]: dracut-dracut-053
Feb 13 15:52:22.390807 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:52:22.426004 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 15:52:22.426037 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:52:22.426086 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:52:22.430459 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 15:52:22.432053 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:22.437674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:22.538301 kernel: SCSI subsystem initialized
Feb 13 15:52:22.551281 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:52:22.568272 kernel: iscsi: registered transport (tcp)
Feb 13 15:52:22.596595 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:52:22.596704 kernel: QLogic iSCSI HBA Driver
Feb 13 15:52:22.668554 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:52:22.677753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:52:22.713947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:52:22.714047 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:52:22.715292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:52:22.772313 kernel: raid6: avx2x4   gen() 26608 MB/s
Feb 13 15:52:22.790287 kernel: raid6: avx2x2   gen() 23764 MB/s
Feb 13 15:52:22.808653 kernel: raid6: avx2x1   gen() 13901 MB/s
Feb 13 15:52:22.808785 kernel: raid6: using algorithm avx2x4 gen() 26608 MB/s
Feb 13 15:52:22.827311 kernel: raid6: .... xor() 8723 MB/s, rmw enabled
Feb 13 15:52:22.827416 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:52:22.856290 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:52:23.034274 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:52:23.052513 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:52:23.059643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:23.086902 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 15:52:23.094259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:52:23.102632 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:52:23.130870 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 15:52:23.178370 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:52:23.185781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:52:23.252265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:23.260556 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:52:23.298551 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:52:23.301930 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:52:23.303358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:23.305982 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:52:23.313868 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:52:23.349994 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:52:23.362247 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 15:52:23.503015 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:52:23.503182 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:52:23.503196 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 15:52:23.504603 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:52:23.504636 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:52:23.504648 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:52:23.504660 kernel: GPT:9289727 != 125829119
Feb 13 15:52:23.504671 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:52:23.504682 kernel: GPT:9289727 != 125829119
Feb 13 15:52:23.504694 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:52:23.504706 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:23.504717 kernel: ACPI: bus type USB registered
Feb 13 15:52:23.504743 kernel: usbcore: registered new interface driver usbfs
Feb 13 15:52:23.504758 kernel: usbcore: registered new interface driver hub
Feb 13 15:52:23.504769 kernel: usbcore: registered new device driver usb
Feb 13 15:52:23.504780 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 15:52:23.513469 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Feb 13 15:52:23.416739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:52:23.588469 kernel: libata version 3.00 loaded.
Feb 13 15:52:23.588526 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 15:52:23.588789 kernel: scsi host1: ata_piix
Feb 13 15:52:23.588931 kernel: scsi host2: ata_piix
Feb 13 15:52:23.589128 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 15:52:23.589142 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 15:52:23.416893 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:23.417977 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:23.598507 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459)
Feb 13 15:52:23.419464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:23.419665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:23.420581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:23.435946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:23.589657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:23.610270 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (463)
Feb 13 15:52:23.626939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:52:23.648259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:52:23.656503 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 15:52:23.656862 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 15:52:23.657092 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 15:52:23.658591 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 15:52:23.658764 kernel: hub 1-0:1.0: USB hub found
Feb 13 15:52:23.658955 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 15:52:23.664922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:52:23.672472 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:52:23.673833 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:52:23.680602 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:52:23.691645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:23.704786 disk-uuid[544]: Primary Header is updated.
Feb 13 15:52:23.704786 disk-uuid[544]: Secondary Entries is updated.
Feb 13 15:52:23.704786 disk-uuid[544]: Secondary Header is updated.
Feb 13 15:52:23.715264 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:23.715734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:23.725402 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:24.728302 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:24.728391 disk-uuid[550]: The operation has completed successfully.
Feb 13 15:52:24.782932 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:52:24.783048 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:52:24.796598 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:52:24.804462 sh[564]: Success
Feb 13 15:52:24.826325 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:52:24.931624 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:52:24.934938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:52:24.936661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:52:24.971431 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:52:24.971529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:24.974091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:52:24.974194 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:52:24.976613 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:52:24.990031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:52:24.992336 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:52:24.999569 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:52:25.003515 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:52:25.027351 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:52:25.027469 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:25.027491 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:25.033318 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:25.050067 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:52:25.052746 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:52:25.061656 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:52:25.070561 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:52:25.202234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:52:25.212518 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:52:25.231756 ignition[659]: Ignition 2.20.0
Feb 13 15:52:25.231780 ignition[659]: Stage: fetch-offline
Feb 13 15:52:25.234541 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:52:25.231861 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:25.231878 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:25.232084 ignition[659]: parsed url from cmdline: ""
Feb 13 15:52:25.232090 ignition[659]: no config URL provided
Feb 13 15:52:25.232099 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:52:25.232113 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:52:25.232121 ignition[659]: failed to fetch config: resource requires networking
Feb 13 15:52:25.232625 ignition[659]: Ignition finished successfully
Feb 13 15:52:25.248861 systemd-networkd[753]: lo: Link UP
Feb 13 15:52:25.248885 systemd-networkd[753]: lo: Gained carrier
Feb 13 15:52:25.252581 systemd-networkd[753]: Enumeration completed
Feb 13 15:52:25.252751 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:52:25.254160 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 15:52:25.254169 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 15:52:25.255439 systemd[1]: Reached target network.target - Network.
Feb 13 15:52:25.255689 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:52:25.255693 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:52:25.256756 systemd-networkd[753]: eth0: Link UP
Feb 13 15:52:25.259272 systemd-networkd[753]: eth0: Gained carrier
Feb 13 15:52:25.259300 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 15:52:25.263842 systemd-networkd[753]: eth1: Link UP
Feb 13 15:52:25.263846 systemd-networkd[753]: eth1: Gained carrier
Feb 13 15:52:25.263862 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:52:25.265321 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:52:25.280342 systemd-networkd[753]: eth0: DHCPv4 address 143.110.144.28/20, gateway 143.110.144.1 acquired from 169.254.169.253
Feb 13 15:52:25.284359 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253
Feb 13 15:52:25.296075 ignition[757]: Ignition 2.20.0
Feb 13 15:52:25.296091 ignition[757]: Stage: fetch
Feb 13 15:52:25.296374 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:25.296386 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:25.296486 ignition[757]: parsed url from cmdline: ""
Feb 13 15:52:25.296489 ignition[757]: no config URL provided
Feb 13 15:52:25.296494 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:52:25.296505 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:52:25.296530 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 15:52:25.314094 ignition[757]: GET result: OK
Feb 13 15:52:25.314251 ignition[757]: parsing config with SHA512: f338e9862dea3b455d8fe982c0dda2bc3db0c1977c9c353760e9f1b7a66800e32b262953b53a726b0ac9ce27d7bc3a06e818511e6ce3f895d5a8032b09b6d6f4
Feb 13 15:52:25.319590 unknown[757]: fetched base config from "system"
Feb 13 15:52:25.319603 unknown[757]: fetched base config from "system"
Feb 13 15:52:25.320088 ignition[757]: fetch: fetch complete
Feb 13 15:52:25.319610 unknown[757]: fetched user config from "digitalocean"
Feb 13 15:52:25.320094 ignition[757]: fetch: fetch passed
Feb 13 15:52:25.323588 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:52:25.320165 ignition[757]: Ignition finished successfully
Feb 13 15:52:25.329695 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:52:25.370113 ignition[765]: Ignition 2.20.0
Feb 13 15:52:25.370134 ignition[765]: Stage: kargs
Feb 13 15:52:25.370505 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:25.370522 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:25.372549 ignition[765]: kargs: kargs passed
Feb 13 15:52:25.372617 ignition[765]: Ignition finished successfully
Feb 13 15:52:25.377531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:52:25.387508 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:52:25.408529 ignition[771]: Ignition 2.20.0
Feb 13 15:52:25.408542 ignition[771]: Stage: disks
Feb 13 15:52:25.408764 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:25.408774 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:25.411875 ignition[771]: disks: disks passed
Feb 13 15:52:25.411977 ignition[771]: Ignition finished successfully
Feb 13 15:52:25.414959 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:52:25.421602 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:52:25.423759 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:52:25.424399 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:52:25.426130 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:52:25.426765 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:52:25.436529 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:52:25.459310 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:52:25.464616 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:52:25.473503 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:52:25.616251 kernel: EXT4-fs (vda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:52:25.616885 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:52:25.618361 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:52:25.625610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:52:25.628367 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:52:25.632249 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Feb 13 15:52:25.638099 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:52:25.640321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:52:25.641847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:52:25.650723 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (788)
Feb 13 15:52:25.650798 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:52:25.654522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:25.654619 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:25.660028 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:52:25.671533 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:52:25.677538 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:25.683146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:52:25.766414 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:52:25.776578 coreos-metadata[790]: Feb 13 15:52:25.776 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:52:25.779714 coreos-metadata[791]: Feb 13 15:52:25.779 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:52:25.785635 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:52:25.793256 coreos-metadata[791]: Feb 13 15:52:25.790 INFO Fetch successful
Feb 13 15:52:25.794123 coreos-metadata[790]: Feb 13 15:52:25.792 INFO Fetch successful
Feb 13 15:52:25.797556 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Feb 13 15:52:25.801024 coreos-metadata[791]: Feb 13 15:52:25.799 INFO wrote hostname ci-4186.1.1-b-9620dd7e41 to /sysroot/etc/hostname
Feb 13 15:52:25.800269 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Feb 13 15:52:25.804078 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:52:25.804469 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:52:25.813725 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:52:25.961095 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:52:25.967492 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:52:25.970609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:52:25.988058 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:52:25.990047 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:52:26.046582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:52:26.050464 ignition[908]: INFO     : Ignition 2.20.0
Feb 13 15:52:26.050464 ignition[908]: INFO     : Stage: mount
Feb 13 15:52:26.050464 ignition[908]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:26.050464 ignition[908]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:26.056491 ignition[908]: INFO     : mount: mount passed
Feb 13 15:52:26.056491 ignition[908]: INFO     : Ignition finished successfully
Feb 13 15:52:26.052996 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:52:26.066604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:52:26.086640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:52:26.104279 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (920)
Feb 13 15:52:26.104410 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:52:26.110539 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:26.110668 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:26.117664 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:26.122588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:52:26.162229 ignition[937]: INFO     : Ignition 2.20.0
Feb 13 15:52:26.162229 ignition[937]: INFO     : Stage: files
Feb 13 15:52:26.164673 ignition[937]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:26.164673 ignition[937]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:26.164673 ignition[937]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:52:26.168013 ignition[937]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:52:26.168013 ignition[937]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:52:26.170783 ignition[937]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:52:26.172088 ignition[937]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:52:26.174418 ignition[937]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:52:26.172715 unknown[937]: wrote ssh authorized keys file for user: core
Feb 13 15:52:26.176926 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:52:26.178698 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:52:26.557493 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:52:26.599540 systemd-networkd[753]: eth0: Gained IPv6LL
Feb 13 15:52:26.920768 systemd-networkd[753]: eth1: Gained IPv6LL
Feb 13 15:52:26.934778 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:52:26.936467 ignition[937]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:52:26.936467 ignition[937]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:52:26.936467 ignition[937]: INFO     : files: files passed
Feb 13 15:52:26.936467 ignition[937]: INFO     : Ignition finished successfully
Feb 13 15:52:26.938392 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:52:26.958134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:52:26.974759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:52:26.976373 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:52:26.976564 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:52:27.006287 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:27.006287 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:27.010836 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:27.011746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:52:27.013844 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:52:27.028721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:52:27.087190 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:52:27.087993 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:52:27.089503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:52:27.090520 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:52:27.091313 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:52:27.110807 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:52:27.142001 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:52:27.151667 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:52:27.185647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:27.186611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:27.188742 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:52:27.190647 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:52:27.190934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:52:27.192555 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:52:27.194443 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:52:27.196671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:52:27.198934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:52:27.200259 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:52:27.202222 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:52:27.203646 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:52:27.205038 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:52:27.206874 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:52:27.208814 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:52:27.210471 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:52:27.210714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:52:27.214146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:52:27.216982 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:27.219275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:52:27.220034 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:27.221944 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:52:27.222201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:52:27.227847 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:52:27.228255 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:52:27.230392 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:52:27.230673 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:52:27.233732 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:52:27.234057 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:52:27.244015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:52:27.265825 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:52:27.268798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:52:27.269327 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:27.272575 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:52:27.272788 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:52:27.287882 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:52:27.288090 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:52:27.312804 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:52:27.348965 ignition[989]: INFO     : Ignition 2.20.0
Feb 13 15:52:27.348965 ignition[989]: INFO     : Stage: umount
Feb 13 15:52:27.377083 ignition[989]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:27.377083 ignition[989]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:52:27.377083 ignition[989]: INFO     : umount: umount passed
Feb 13 15:52:27.377083 ignition[989]: INFO     : Ignition finished successfully
Feb 13 15:52:27.376243 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:52:27.376447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:52:27.379001 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:52:27.379170 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:52:27.381726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:52:27.381860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:52:27.382743 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:52:27.382857 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:52:27.394675 systemd[1]: Stopped target network.target - Network.
Feb 13 15:52:27.396046 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:52:27.396177 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:52:27.398270 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:52:27.400143 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:52:27.400550 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:27.401821 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:52:27.403195 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:52:27.405076 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:52:27.405393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:52:27.409020 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:52:27.409358 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:52:27.410962 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:52:27.411096 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:52:27.412765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:52:27.412875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:52:27.415157 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:52:27.416560 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:27.424076 systemd-networkd[753]: eth0: DHCPv6 lease lost
Feb 13 15:52:27.430337 systemd-networkd[753]: eth1: DHCPv6 lease lost
Feb 13 15:52:27.432828 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:52:27.433097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:52:27.443888 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:52:27.444052 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:27.449435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:52:27.449750 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:52:27.453845 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:52:27.453961 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:27.455642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:52:27.455763 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:52:27.472790 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:52:27.474575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:52:27.474716 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:52:27.475652 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:52:27.475761 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:27.476683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:52:27.476756 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:27.479245 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:52:27.479359 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:27.481802 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:27.510724 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:52:27.512171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:52:27.514235 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:52:27.514314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:27.515021 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:52:27.515082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:27.517115 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:52:27.517438 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:52:27.520048 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:52:27.520183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:52:27.521690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:52:27.521817 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:27.536648 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:52:27.538782 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:52:27.538946 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:27.542186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:52:27.542338 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:27.545727 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:52:27.545847 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:27.551294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:27.551435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:27.552781 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:52:27.552998 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:52:27.567368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:52:27.567598 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:52:27.570521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:52:27.579652 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:52:27.605200 systemd[1]: Switching root.
Feb 13 15:52:27.710968 systemd-journald[183]: Journal stopped
Feb 13 15:52:29.549731 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:52:29.549836 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:52:29.549863 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:52:29.549882 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:52:29.549900 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:52:29.549919 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:52:29.549961 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:52:29.549989 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:52:29.550010 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:52:29.550032 kernel: audit: type=1403 audit(1739461947.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:52:29.550068 systemd[1]: Successfully loaded SELinux policy in 64.527ms.
Feb 13 15:52:29.550100 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.219ms.
Feb 13 15:52:29.550132 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:52:29.550155 systemd[1]: Detected virtualization kvm.
Feb 13 15:52:29.550181 systemd[1]: Detected architecture x86-64.
Feb 13 15:52:29.550279 systemd[1]: Detected first boot.
Feb 13 15:52:29.550303 systemd[1]: Hostname set to <ci-4186.1.1-b-9620dd7e41>.
Feb 13 15:52:29.550325 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:52:29.550346 zram_generator::config[1031]: No configuration found.
Feb 13 15:52:29.550370 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:52:29.550392 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:52:29.550414 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:52:29.550442 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:52:29.550467 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:52:29.550489 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:52:29.550511 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:52:29.550533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:52:29.550557 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:52:29.550575 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:52:29.550598 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:52:29.550626 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:52:29.550645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:29.550663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:29.550682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:52:29.550699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:52:29.550718 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:52:29.550736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:52:29.550753 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:52:29.550775 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:29.550799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:52:29.550822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:52:29.550847 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:52:29.550870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:52:29.550894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:29.550919 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:52:29.550942 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:52:29.550970 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:52:29.550993 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:52:29.551016 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:52:29.551039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:29.551063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:29.551081 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:29.551101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:52:29.551121 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:52:29.551142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:52:29.551169 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:52:29.551189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:29.551230 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:52:29.551272 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:52:29.551294 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:52:29.551315 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:52:29.551340 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:52:29.551360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:52:29.551386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:29.551409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:52:29.551431 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:52:29.551453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:29.551474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:52:29.551497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:29.551519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:52:29.551540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:29.551563 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:52:29.551589 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:52:29.551612 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:52:29.553297 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:52:29.553367 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:52:29.553389 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:52:29.553412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:52:29.553430 kernel: loop: module loaded
Feb 13 15:52:29.553453 kernel: fuse: init (API version 7.39)
Feb 13 15:52:29.553472 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:52:29.553505 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:52:29.553529 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:52:29.553551 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:52:29.553574 systemd[1]: Stopped verity-setup.service.
Feb 13 15:52:29.553599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:29.553621 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:52:29.553642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:52:29.553666 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:52:29.553693 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:52:29.553715 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:52:29.553737 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:52:29.553760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:29.553786 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:52:29.553809 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:52:29.553860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:29.553881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:29.553899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:29.553918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:29.553942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:52:29.553960 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:52:29.553983 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:29.554005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:29.554026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:29.554050 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:52:29.554072 kernel: ACPI: bus type drm_connector registered
Feb 13 15:52:29.554155 systemd-journald[1107]: Collecting audit messages is disabled.
Feb 13 15:52:29.554220 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:52:29.554260 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:52:29.554283 systemd-journald[1107]: Journal started
Feb 13 15:52:29.554329 systemd-journald[1107]: Runtime Journal (/run/log/journal/ceb7eebc7abd472f9bc807d8cfd9741e) is 4.9M, max 39.3M, 34.4M free.
Feb 13 15:52:29.002291 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:52:29.056197 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:52:29.058108 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:52:29.566339 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:52:29.576244 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:52:29.586283 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:52:29.590261 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:52:29.597289 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:52:29.608258 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:52:29.623594 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:52:29.623716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:29.636343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:52:29.639259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:52:29.653267 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:52:29.664999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:52:29.669321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:52:29.678264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:52:29.708324 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:52:29.715266 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:52:29.733731 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:52:29.735868 kernel: loop0: detected capacity change from 0 to 211296
Feb 13 15:52:29.739714 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:52:29.740718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:52:29.766659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:52:29.767552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:52:29.768486 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:52:29.796424 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:52:29.803893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:29.847278 kernel: loop1: detected capacity change from 0 to 8
Feb 13 15:52:29.859331 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:52:29.863376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:29.879556 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:52:29.888613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:52:29.902506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:52:29.910287 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:52:29.913576 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:52:29.965513 systemd-journald[1107]: Time spent on flushing to /var/log/journal/ceb7eebc7abd472f9bc807d8cfd9741e is 86.571ms for 982 entries.
Feb 13 15:52:29.965513 systemd-journald[1107]: System Journal (/var/log/journal/ceb7eebc7abd472f9bc807d8cfd9741e) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:52:30.109421 systemd-journald[1107]: Received client request to flush runtime journal.
Feb 13 15:52:30.109557 kernel: loop3: detected capacity change from 0 to 141000
Feb 13 15:52:29.969428 systemd-tmpfiles[1134]: ACLs are not supported, ignoring.
Feb 13 15:52:29.969447 systemd-tmpfiles[1134]: ACLs are not supported, ignoring.
Feb 13 15:52:29.974672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:52:29.975841 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:52:30.002483 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:52:30.009745 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:30.022549 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:52:30.112304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:52:30.126682 kernel: loop4: detected capacity change from 0 to 211296
Feb 13 15:52:30.184837 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:52:30.200651 kernel: loop5: detected capacity change from 0 to 8
Feb 13 15:52:30.198270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:52:30.215295 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:52:30.284261 kernel: loop7: detected capacity change from 0 to 141000
Feb 13 15:52:30.322326 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 15:52:30.327943 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Feb 13 15:52:30.328557 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Feb 13 15:52:30.330532 (sd-merge)[1176]: Merged extensions into '/usr'.
Feb 13 15:52:30.343714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:30.345879 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:52:30.345906 systemd[1]: Reloading...
Feb 13 15:52:30.574248 zram_generator::config[1207]: No configuration found.
Feb 13 15:52:30.847462 ldconfig[1129]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:52:30.853420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:30.935723 systemd[1]: Reloading finished in 588 ms.
Feb 13 15:52:30.981100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:52:30.987901 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:52:30.997850 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:52:31.009755 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:52:31.035058 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:52:31.035091 systemd[1]: Reloading...
Feb 13 15:52:31.067437 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:52:31.069955 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:52:31.071840 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:52:31.072608 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Feb 13 15:52:31.072850 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Feb 13 15:52:31.085399 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:31.085656 systemd-tmpfiles[1251]: Skipping /boot
Feb 13 15:52:31.112054 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:31.112388 systemd-tmpfiles[1251]: Skipping /boot
Feb 13 15:52:31.228274 zram_generator::config[1274]: No configuration found.
Feb 13 15:52:31.455612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:31.546956 systemd[1]: Reloading finished in 510 ms.
Feb 13 15:52:31.569907 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:52:31.578005 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:31.594744 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:52:31.602951 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:52:31.608756 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:52:31.619773 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:31.623656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:31.629793 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:52:31.636153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.637367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:31.646420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:31.659762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:31.666571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:31.668044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:31.668754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.674114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.674521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:31.674799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:31.685773 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:52:31.687515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.696673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.697030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:31.710676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:52:31.712667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:31.712953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:31.718327 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:52:31.723900 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:52:31.733342 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:52:31.735004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:31.735955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:31.763482 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:52:31.768668 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:52:31.771452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:52:31.772634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:31.772947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:31.774712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:31.776352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:31.778104 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:52:31.778571 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:52:31.787329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:52:31.791592 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Feb 13 15:52:31.796647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:52:31.796797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:52:31.834026 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:52:31.856710 augenrules[1364]: No rules
Feb 13 15:52:31.858110 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:52:31.858526 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:52:31.859536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:52:31.875667 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:52:31.886346 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:52:32.004972 systemd-resolved[1326]: Positive Trust Anchors:
Feb 13 15:52:32.005003 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:52:32.005050 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:52:32.014558 systemd-resolved[1326]: Using system hostname 'ci-4186.1.1-b-9620dd7e41'.
Feb 13 15:52:32.017354 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:32.018155 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:32.133943 systemd-networkd[1374]: lo: Link UP
Feb 13 15:52:32.134474 systemd-networkd[1374]: lo: Gained carrier
Feb 13 15:52:32.135655 systemd-networkd[1374]: Enumeration completed
Feb 13 15:52:32.135840 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:52:32.137889 systemd[1]: Reached target network.target - Network.
Feb 13 15:52:32.145573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:52:32.155598 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:52:32.157512 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:52:32.186454 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:52:32.199250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1389)
Feb 13 15:52:32.222498 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Feb 13 15:52:32.224349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:32.224536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:32.234578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:32.242444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:32.246510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:32.248898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:32.249393 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:52:32.249420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:32.250155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:32.252325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:32.262449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:52:32.264608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:32.265485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:32.284047 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:52:32.284855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:52:32.303251 kernel: ISO 9660 Extensions: RRIP_1991A
Feb 13 15:52:32.308908 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Feb 13 15:52:32.315625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:32.315859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:32.317037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:52:32.333026 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:52:32.356992 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-92:4d:12:a3:ff:41.network.
Feb 13 15:52:32.359174 systemd-networkd[1374]: eth1: Link UP
Feb 13 15:52:32.359189 systemd-networkd[1374]: eth1: Gained carrier
Feb 13 15:52:32.362528 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Feb 13 15:52:32.379074 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-3a:96:3c:92:9c:3d.network.
Feb 13 15:52:32.380914 systemd-networkd[1374]: eth0: Link UP
Feb 13 15:52:32.380934 systemd-networkd[1374]: eth0: Gained carrier
Feb 13 15:52:32.401258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:52:32.418570 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:52:32.445348 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 13 15:52:32.472259 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:52:32.513662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:32.525300 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:52:32.589294 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 13 15:52:32.595603 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 13 15:52:32.604380 kernel: Console: switching to colour dummy device 80x25
Feb 13 15:52:32.604481 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 15:52:32.604499 kernel: [drm] features: -context_init
Feb 13 15:52:32.608238 kernel: [drm] number of scanouts: 1
Feb 13 15:52:32.608390 kernel: [drm] number of cap sets: 0
Feb 13 15:52:32.628630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:32.628837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:32.630282 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Feb 13 15:52:32.638265 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 13 15:52:32.638376 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:52:32.642334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:32.645259 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 15:52:32.670650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:32.670966 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:32.732777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:32.793386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:32.806304 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:52:32.836833 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:52:32.843641 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:52:32.871362 lvm[1438]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:52:32.908313 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:52:32.909794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:52:32.910669 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:52:32.911027 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:52:32.911234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:52:32.911647 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:52:32.911910 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:52:32.912020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:52:32.912115 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:52:32.912148 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:52:32.912202 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:52:32.916268 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:52:32.919104 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:52:32.928905 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:52:32.932543 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:52:32.934936 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:52:32.948650 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:52:32.950378 lvm[1442]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:52:32.953473 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:52:32.961003 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:52:32.961078 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:52:32.969553 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:52:32.978894 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:52:32.991528 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:52:33.000464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:52:33.054277 jq[1448]: false
Feb 13 15:52:33.059861 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:52:33.061958 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:52:33.070386 coreos-metadata[1444]: Feb 13 15:52:33.069 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:52:33.078420 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:52:33.084532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:52:33.088577 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:52:33.097933 coreos-metadata[1444]: Feb 13 15:52:33.097 INFO Fetch successful
Feb 13 15:52:33.100083 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:52:33.105550 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:52:33.106698 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:52:33.116581 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:52:33.126016 dbus-daemon[1445]: [system] SELinux support is enabled
Feb 13 15:52:33.129406 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:52:33.131633 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:52:33.142065 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:52:33.156863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:52:33.158393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:52:33.158907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:52:33.159147 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:52:33.161654 extend-filesystems[1449]: Found loop4
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found loop5
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found loop6
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found loop7
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda1
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda2
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda3
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found usr
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda4
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda6
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda7
Feb 13 15:52:33.165066 extend-filesystems[1449]: Found vda9
Feb 13 15:52:33.165066 extend-filesystems[1449]: Checking size of /dev/vda9
Feb 13 15:52:33.201092 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:52:33.203466 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:52:33.203519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:52:33.204154 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:52:33.207993 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Feb 13 15:52:33.208068 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:52:33.223274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1389)
Feb 13 15:52:33.250498 extend-filesystems[1449]: Resized partition /dev/vda9
Feb 13 15:52:33.270351 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:52:33.272728 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:52:33.273033 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:52:33.285294 jq[1457]: true
Feb 13 15:52:33.292248 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Feb 13 15:52:33.294887 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:52:33.302002 update_engine[1454]: I20250213 15:52:33.300752  1454 main.cc:92] Flatcar Update Engine starting
Feb 13 15:52:33.307451 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:52:33.308619 update_engine[1454]: I20250213 15:52:33.307485  1454 update_check_scheduler.cc:74] Next update check in 4m0s
Feb 13 15:52:33.319530 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:52:33.402369 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:52:33.425885 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:52:33.447925 jq[1481]: true
Feb 13 15:52:33.552699 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Feb 13 15:52:33.566417 systemd-logind[1453]: New seat seat0.
Feb 13 15:52:33.583509 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:52:33.585506 systemd-networkd[1374]: eth0: Gained IPv6LL
Feb 13 15:52:33.595726 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:52:33.595758 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:52:33.596505 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:52:33.600711 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:52:33.606177 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:52:33.610153 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:52:33.610153 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8
Feb 13 15:52:33.610153 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Feb 13 15:52:33.621875 extend-filesystems[1449]: Resized filesystem in /dev/vda9
Feb 13 15:52:33.621875 extend-filesystems[1449]: Found vdb
Feb 13 15:52:33.623677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:33.639646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:52:33.642912 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:52:33.645399 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:52:33.689577 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:52:33.699162 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:52:33.714692 systemd[1]: Starting sshkeys.service...
Feb 13 15:52:33.732746 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:52:33.772502 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:52:33.786779 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:52:33.879369 coreos-metadata[1529]: Feb 13 15:52:33.878 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:52:33.894783 coreos-metadata[1529]: Feb 13 15:52:33.894 INFO Fetch successful
Feb 13 15:52:33.917589 unknown[1529]: wrote ssh authorized keys file for user: core
Feb 13 15:52:33.941289 containerd[1477]: time="2025-02-13T15:52:33.938990859Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:52:33.978301 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:52:33.983552 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:52:33.989941 systemd[1]: Finished sshkeys.service.
Feb 13 15:52:33.994516 containerd[1477]: time="2025-02-13T15:52:33.994413883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.997379 containerd[1477]: time="2025-02-13T15:52:33.997302478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:33.997556 containerd[1477]: time="2025-02-13T15:52:33.997537939Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:52:33.997630 containerd[1477]: time="2025-02-13T15:52:33.997614433Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:52:33.997898 containerd[1477]: time="2025-02-13T15:52:33.997873218Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:52:33.998022 containerd[1477]: time="2025-02-13T15:52:33.998006368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.998141 containerd[1477]: time="2025-02-13T15:52:33.998125115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:33.998194 containerd[1477]: time="2025-02-13T15:52:33.998180236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.998531 containerd[1477]: time="2025-02-13T15:52:33.998501866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.998638373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.998661308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.998674285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.998772437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.999027317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.999156242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:33.999278 containerd[1477]: time="2025-02-13T15:52:33.999177650Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:52:33.999568 containerd[1477]: time="2025-02-13T15:52:33.999549649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:52:33.999690 containerd[1477]: time="2025-02-13T15:52:33.999672193Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:52:34.010951 containerd[1477]: time="2025-02-13T15:52:34.010891979Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:52:34.011297 containerd[1477]: time="2025-02-13T15:52:34.011177063Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:52:34.011373 containerd[1477]: time="2025-02-13T15:52:34.011359666Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.011459199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.011481677Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.011713593Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.011948449Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012080454Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012097820Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012114215Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012129035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012144777Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012178322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.013510 containerd[1477]: time="2025-02-13T15:52:34.012194085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.014132 containerd[1477]: time="2025-02-13T15:52:34.014006185Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.014132 containerd[1477]: time="2025-02-13T15:52:34.014060945Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.014132 containerd[1477]: time="2025-02-13T15:52:34.014077972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.014132 containerd[1477]: time="2025-02-13T15:52:34.014090974Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:52:34.014342 containerd[1477]: time="2025-02-13T15:52:34.014236324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014342 containerd[1477]: time="2025-02-13T15:52:34.014255999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014342 containerd[1477]: time="2025-02-13T15:52:34.014269665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014458 containerd[1477]: time="2025-02-13T15:52:34.014284039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014554 containerd[1477]: time="2025-02-13T15:52:34.014536543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014665 containerd[1477]: time="2025-02-13T15:52:34.014647745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014764 containerd[1477]: time="2025-02-13T15:52:34.014747568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014857 containerd[1477]: time="2025-02-13T15:52:34.014840759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.014959 containerd[1477]: time="2025-02-13T15:52:34.014942355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.015040 containerd[1477]: time="2025-02-13T15:52:34.015028838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016238 containerd[1477]: time="2025-02-13T15:52:34.016203278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016384 containerd[1477]: time="2025-02-13T15:52:34.016369216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016457582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016481062Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016526989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016542707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016556039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016640064Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016660500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016779396Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016810696Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:52:34.016846 containerd[1477]: time="2025-02-13T15:52:34.016825432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.017269 containerd[1477]: time="2025-02-13T15:52:34.017129440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:52:34.017269 containerd[1477]: time="2025-02-13T15:52:34.017157480Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:52:34.017269 containerd[1477]: time="2025-02-13T15:52:34.017173792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:52:34.018961 containerd[1477]: time="2025-02-13T15:52:34.018878729Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:52:34.019543 containerd[1477]: time="2025-02-13T15:52:34.019233491Z" level=info msg="Connect containerd service"
Feb 13 15:52:34.019543 containerd[1477]: time="2025-02-13T15:52:34.019304374Z" level=info msg="using legacy CRI server"
Feb 13 15:52:34.019543 containerd[1477]: time="2025-02-13T15:52:34.019313806Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:52:34.019543 containerd[1477]: time="2025-02-13T15:52:34.019510511Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:52:34.023946 containerd[1477]: time="2025-02-13T15:52:34.023864802Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:52:34.025839 containerd[1477]: time="2025-02-13T15:52:34.025776304Z" level=info msg="Start subscribing containerd event"
Feb 13 15:52:34.026007 containerd[1477]: time="2025-02-13T15:52:34.025994396Z" level=info msg="Start recovering state"
Feb 13 15:52:34.026139 containerd[1477]: time="2025-02-13T15:52:34.026127407Z" level=info msg="Start event monitor"
Feb 13 15:52:34.026193 containerd[1477]: time="2025-02-13T15:52:34.026183769Z" level=info msg="Start snapshots syncer"
Feb 13 15:52:34.026262 containerd[1477]: time="2025-02-13T15:52:34.026251970Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:52:34.026331 containerd[1477]: time="2025-02-13T15:52:34.026321278Z" level=info msg="Start streaming server"
Feb 13 15:52:34.028115 containerd[1477]: time="2025-02-13T15:52:34.028073277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:52:34.028456 containerd[1477]: time="2025-02-13T15:52:34.028432036Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:52:34.028795 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:52:34.030383 containerd[1477]: time="2025-02-13T15:52:34.029820131Z" level=info msg="containerd successfully booted in 0.091868s"
Feb 13 15:52:34.229307 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:52:34.263510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:52:34.283652 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:52:34.296661 systemd[1]: Started sshd@0-143.110.144.28:22-139.178.89.65:50056.service - OpenSSH per-connection server daemon (139.178.89.65:50056).
Feb 13 15:52:34.312686 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:52:34.313628 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:52:34.331984 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:52:34.379805 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:52:34.391876 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:52:34.402871 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:52:34.404006 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:52:34.408617 systemd-networkd[1374]: eth1: Gained IPv6LL
Feb 13 15:52:34.490436 sshd[1548]: Accepted publickey for core from 139.178.89.65 port 50056 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:34.493575 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:34.511311 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:52:34.520692 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:52:34.526567 systemd-logind[1453]: New session 1 of user core.
Feb 13 15:52:34.559102 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:52:34.569746 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:52:34.594898 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:52:34.742696 systemd[1560]: Queued start job for default target default.target.
Feb 13 15:52:34.750558 systemd[1560]: Created slice app.slice - User Application Slice.
Feb 13 15:52:34.750898 systemd[1560]: Reached target paths.target - Paths.
Feb 13 15:52:34.751047 systemd[1560]: Reached target timers.target - Timers.
Feb 13 15:52:34.754432 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:52:34.775411 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:52:34.777090 systemd[1560]: Reached target sockets.target - Sockets.
Feb 13 15:52:34.777335 systemd[1560]: Reached target basic.target - Basic System.
Feb 13 15:52:34.777579 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:52:34.777860 systemd[1560]: Reached target default.target - Main User Target.
Feb 13 15:52:34.777929 systemd[1560]: Startup finished in 169ms.
Feb 13 15:52:34.787582 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:52:34.882873 systemd[1]: Started sshd@1-143.110.144.28:22-139.178.89.65:59090.service - OpenSSH per-connection server daemon (139.178.89.65:59090).
Feb 13 15:52:35.029564 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 59090 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:35.031863 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:35.046598 systemd-logind[1453]: New session 2 of user core.
Feb 13 15:52:35.055779 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:52:35.132742 sshd[1573]: Connection closed by 139.178.89.65 port 59090
Feb 13 15:52:35.133791 sshd-session[1571]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:35.150364 systemd[1]: sshd@1-143.110.144.28:22-139.178.89.65:59090.service: Deactivated successfully.
Feb 13 15:52:35.154447 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:52:35.157867 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:52:35.168975 systemd[1]: Started sshd@2-143.110.144.28:22-139.178.89.65:59096.service - OpenSSH per-connection server daemon (139.178.89.65:59096).
Feb 13 15:52:35.178954 systemd-logind[1453]: Removed session 2.
Feb 13 15:52:35.215550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:35.220713 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:52:35.223524 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:35.226164 systemd[1]: Startup finished in 1.595s (kernel) + 6.146s (initrd) + 7.326s (userspace) = 15.068s.
Feb 13 15:52:35.252264 sshd[1578]: Accepted publickey for core from 139.178.89.65 port 59096 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:35.255949 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:35.266727 agetty[1556]: failed to open credentials directory
Feb 13 15:52:35.277457 agetty[1557]: failed to open credentials directory
Feb 13 15:52:35.282320 systemd-logind[1453]: New session 3 of user core.
Feb 13 15:52:35.285760 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:52:35.359001 sshd[1590]: Connection closed by 139.178.89.65 port 59096
Feb 13 15:52:35.361315 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:35.365086 systemd[1]: sshd@2-143.110.144.28:22-139.178.89.65:59096.service: Deactivated successfully.
Feb 13 15:52:35.368527 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:52:35.371298 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:52:35.372727 systemd-logind[1453]: Removed session 3.
Feb 13 15:52:36.526170 kubelet[1585]: E0213 15:52:36.525882    1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:36.531714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:36.532023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:36.533290 systemd[1]: kubelet.service: Consumed 1.789s CPU time.
Feb 13 15:52:39.909317 systemd-resolved[1326]: Clock change detected. Flushing caches.
Feb 13 15:52:39.910201 systemd-timesyncd[1346]: Contacted time server 75.72.171.171:123 (1.flatcar.pool.ntp.org).
Feb 13 15:52:39.910294 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-02-13 15:52:39.908868 UTC.
Feb 13 15:52:40.146887 systemd[1]: Started sshd@3-143.110.144.28:22-103.148.213.242:60602.service - OpenSSH per-connection server daemon (103.148.213.242:60602).
Feb 13 15:52:41.567612 sshd[1601]: Invalid user doomi from 103.148.213.242 port 60602
Feb 13 15:52:41.854100 sshd[1601]: Received disconnect from 103.148.213.242 port 60602:11: Bye Bye [preauth]
Feb 13 15:52:41.854100 sshd[1601]: Disconnected from invalid user doomi 103.148.213.242 port 60602 [preauth]
Feb 13 15:52:41.856001 systemd[1]: sshd@3-143.110.144.28:22-103.148.213.242:60602.service: Deactivated successfully.
Feb 13 15:52:46.542762 systemd[1]: Started sshd@4-143.110.144.28:22-139.178.89.65:47008.service - OpenSSH per-connection server daemon (139.178.89.65:47008).
Feb 13 15:52:46.602202 sshd[1606]: Accepted publickey for core from 139.178.89.65 port 47008 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:46.605399 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:46.612773 systemd-logind[1453]: New session 4 of user core.
Feb 13 15:52:46.623662 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:52:46.693458 sshd[1608]: Connection closed by 139.178.89.65 port 47008
Feb 13 15:52:46.694486 sshd-session[1606]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:46.708209 systemd[1]: sshd@4-143.110.144.28:22-139.178.89.65:47008.service: Deactivated successfully.
Feb 13 15:52:46.710538 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:52:46.711633 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:52:46.719846 systemd[1]: Started sshd@5-143.110.144.28:22-139.178.89.65:47020.service - OpenSSH per-connection server daemon (139.178.89.65:47020).
Feb 13 15:52:46.721086 systemd-logind[1453]: Removed session 4.
Feb 13 15:52:46.790538 sshd[1613]: Accepted publickey for core from 139.178.89.65 port 47020 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:46.793127 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:46.803288 systemd-logind[1453]: New session 5 of user core.
Feb 13 15:52:46.809597 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:52:46.870250 sshd[1615]: Connection closed by 139.178.89.65 port 47020
Feb 13 15:52:46.871423 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:46.884691 systemd[1]: sshd@5-143.110.144.28:22-139.178.89.65:47020.service: Deactivated successfully.
Feb 13 15:52:46.887138 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:52:46.889632 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:52:46.897724 systemd[1]: Started sshd@6-143.110.144.28:22-139.178.89.65:47032.service - OpenSSH per-connection server daemon (139.178.89.65:47032).
Feb 13 15:52:46.900232 systemd-logind[1453]: Removed session 5.
Feb 13 15:52:46.956746 sshd[1620]: Accepted publickey for core from 139.178.89.65 port 47032 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:46.959463 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:46.966943 systemd-logind[1453]: New session 6 of user core.
Feb 13 15:52:46.978839 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:52:47.049227 sshd[1622]: Connection closed by 139.178.89.65 port 47032
Feb 13 15:52:47.048785 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:47.064695 systemd[1]: sshd@6-143.110.144.28:22-139.178.89.65:47032.service: Deactivated successfully.
Feb 13 15:52:47.067758 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:52:47.071190 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:52:47.075202 systemd[1]: Started sshd@7-143.110.144.28:22-139.178.89.65:47036.service - OpenSSH per-connection server daemon (139.178.89.65:47036).
Feb 13 15:52:47.077372 systemd-logind[1453]: Removed session 6.
Feb 13 15:52:47.141859 sshd[1627]: Accepted publickey for core from 139.178.89.65 port 47036 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:47.143914 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:47.154107 systemd-logind[1453]: New session 7 of user core.
Feb 13 15:52:47.161666 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:52:47.246593 sudo[1630]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:52:47.247075 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:52:47.271565 sudo[1630]: pam_unix(sudo:session): session closed for user root
Feb 13 15:52:47.275815 sshd[1629]: Connection closed by 139.178.89.65 port 47036
Feb 13 15:52:47.277370 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:47.287573 systemd[1]: sshd@7-143.110.144.28:22-139.178.89.65:47036.service: Deactivated successfully.
Feb 13 15:52:47.290639 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:52:47.295594 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:52:47.308733 systemd[1]: Started sshd@8-143.110.144.28:22-139.178.89.65:47048.service - OpenSSH per-connection server daemon (139.178.89.65:47048).
Feb 13 15:52:47.311668 systemd-logind[1453]: Removed session 7.
Feb 13 15:52:47.373780 sshd[1635]: Accepted publickey for core from 139.178.89.65 port 47048 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:47.375915 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:47.383091 systemd-logind[1453]: New session 8 of user core.
Feb 13 15:52:47.390691 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:52:47.458614 sudo[1639]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:52:47.459089 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:52:47.466062 sudo[1639]: pam_unix(sudo:session): session closed for user root
Feb 13 15:52:47.475334 sudo[1638]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:52:47.476349 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:52:47.518291 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:52:47.576603 augenrules[1661]: No rules
Feb 13 15:52:47.577796 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:52:47.578655 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:52:47.580326 sudo[1638]: pam_unix(sudo:session): session closed for user root
Feb 13 15:52:47.587190 sshd[1637]: Connection closed by 139.178.89.65 port 47048
Feb 13 15:52:47.589540 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:47.599139 systemd[1]: sshd@8-143.110.144.28:22-139.178.89.65:47048.service: Deactivated successfully.
Feb 13 15:52:47.601924 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:52:47.604401 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:52:47.609778 systemd[1]: Started sshd@9-143.110.144.28:22-139.178.89.65:47056.service - OpenSSH per-connection server daemon (139.178.89.65:47056).
Feb 13 15:52:47.612806 systemd-logind[1453]: Removed session 8.
Feb 13 15:52:47.679747 sshd[1669]: Accepted publickey for core from 139.178.89.65 port 47056 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:52:47.681978 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:47.689964 systemd-logind[1453]: New session 9 of user core.
Feb 13 15:52:47.697618 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:52:47.700102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:52:47.703775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:47.765334 sudo[1675]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:52:47.766351 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:52:47.929453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:47.943667 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:48.057571 kubelet[1691]: E0213 15:52:48.057271    1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:48.063731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:48.063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:49.054893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:49.066675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:49.101920 systemd[1]: Reloading requested from client PID 1726 ('systemctl') (unit session-9.scope)...
Feb 13 15:52:49.101947 systemd[1]: Reloading...
Feb 13 15:52:49.265271 zram_generator::config[1764]: No configuration found.
Feb 13 15:52:49.441229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:49.542758 systemd[1]: Reloading finished in 440 ms.
Feb 13 15:52:49.601297 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:52:49.601434 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:52:49.601889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:49.617830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:49.764594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:49.766416 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:52:49.841720 kubelet[1817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:49.841720 kubelet[1817]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:52:49.841720 kubelet[1817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:49.842418 kubelet[1817]: I0213 15:52:49.841833    1817 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:52:50.311912 kubelet[1817]: I0213 15:52:50.311864    1817 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:52:50.311912 kubelet[1817]: I0213 15:52:50.311906    1817 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:52:50.312238 kubelet[1817]: I0213 15:52:50.312202    1817 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:52:50.337823 kubelet[1817]: I0213 15:52:50.337471    1817 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:52:50.351802 kubelet[1817]: I0213 15:52:50.351741    1817 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:52:50.354293 kubelet[1817]: I0213 15:52:50.354202    1817 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:52:50.354578 kubelet[1817]: I0213 15:52:50.354548    1817 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:52:50.355175 kubelet[1817]: I0213 15:52:50.355129    1817 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:52:50.355226 kubelet[1817]: I0213 15:52:50.355191    1817 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:52:50.355396 kubelet[1817]: I0213 15:52:50.355371    1817 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:50.355604 kubelet[1817]: I0213 15:52:50.355576    1817 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:52:50.355736 kubelet[1817]: I0213 15:52:50.355709    1817 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:52:50.355778 kubelet[1817]: I0213 15:52:50.355766    1817 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:52:50.355811 kubelet[1817]: I0213 15:52:50.355800    1817 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:52:50.358501 kubelet[1817]: E0213 15:52:50.358442    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:50.358501 kubelet[1817]: E0213 15:52:50.358494    1817 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:50.359829 kubelet[1817]: I0213 15:52:50.359778    1817 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:52:50.364060 kubelet[1817]: I0213 15:52:50.363962    1817 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:52:50.366283 kubelet[1817]: W0213 15:52:50.365840    1817 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:52:50.366943 kubelet[1817]: I0213 15:52:50.366911    1817 server.go:1256] "Started kubelet"
Feb 13 15:52:50.367109 kubelet[1817]: I0213 15:52:50.367091    1817 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:52:50.368394 kubelet[1817]: I0213 15:52:50.368362    1817 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:52:50.371976 kubelet[1817]: I0213 15:52:50.371933    1817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:52:50.374254 kubelet[1817]: I0213 15:52:50.374200    1817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:52:50.374646 kubelet[1817]: I0213 15:52:50.374528    1817 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:52:50.382373 kubelet[1817]: E0213 15:52:50.381847    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:50.382373 kubelet[1817]: I0213 15:52:50.381952    1817 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:52:50.382373 kubelet[1817]: I0213 15:52:50.382120    1817 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:52:50.382587 kubelet[1817]: I0213 15:52:50.382543    1817 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:52:50.384492 kubelet[1817]: I0213 15:52:50.384367    1817 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:52:50.386519 kubelet[1817]: E0213 15:52:50.386442    1817 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:52:50.386699 kubelet[1817]: I0213 15:52:50.386609    1817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:52:50.391053 kubelet[1817]: I0213 15:52:50.390976    1817 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:52:50.429199 kubelet[1817]: W0213 15:52:50.427930    1817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:52:50.430115 kubelet[1817]: E0213 15:52:50.430076    1817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:52:50.430322 kubelet[1817]: I0213 15:52:50.429643    1817 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:52:50.430430 kubelet[1817]: I0213 15:52:50.430417    1817 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:52:50.430503 kubelet[1817]: I0213 15:52:50.430495    1817 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:50.432573 kubelet[1817]: E0213 15:52:50.432537    1817 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.144.28.1823cf722c04a3ee  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.144.28,UID:143.110.144.28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:143.110.144.28,},FirstTimestamp:2025-02-13 15:52:50.366866414 +0000 UTC m=+0.594508470,LastTimestamp:2025-02-13 15:52:50.366866414 +0000 UTC m=+0.594508470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.144.28,}"
Feb 13 15:52:50.434277 kubelet[1817]: W0213 15:52:50.433336    1817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "143.110.144.28" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:52:50.434277 kubelet[1817]: E0213 15:52:50.433369    1817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.110.144.28" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:52:50.434277 kubelet[1817]: W0213 15:52:50.433416    1817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:52:50.434277 kubelet[1817]: E0213 15:52:50.433425    1817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:52:50.434277 kubelet[1817]: E0213 15:52:50.433611    1817 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"143.110.144.28\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 15:52:50.443512 kubelet[1817]: E0213 15:52:50.443447    1817 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.144.28.1823cf722d2ed0e0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.144.28,UID:143.110.144.28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:143.110.144.28,},FirstTimestamp:2025-02-13 15:52:50.386407648 +0000 UTC m=+0.614049693,LastTimestamp:2025-02-13 15:52:50.386407648 +0000 UTC m=+0.614049693,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.144.28,}"
Feb 13 15:52:50.454684 kubelet[1817]: I0213 15:52:50.454623    1817 policy_none.go:49] "None policy: Start"
Feb 13 15:52:50.458213 kubelet[1817]: I0213 15:52:50.458053    1817 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:52:50.458947 kubelet[1817]: I0213 15:52:50.458900    1817 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:52:50.463873 kubelet[1817]: E0213 15:52:50.463832    1817 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.144.28.1823cf722fb380ad  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.144.28,UID:143.110.144.28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 143.110.144.28 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:143.110.144.28,},FirstTimestamp:2025-02-13 15:52:50.428657837 +0000 UTC m=+0.656299882,LastTimestamp:2025-02-13 15:52:50.428657837 +0000 UTC m=+0.656299882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.144.28,}"
Feb 13 15:52:50.480523 kubelet[1817]: E0213 15:52:50.480088    1817 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.110.144.28.1823cf722fb39abc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.110.144.28,UID:143.110.144.28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 143.110.144.28 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:143.110.144.28,},FirstTimestamp:2025-02-13 15:52:50.428664508 +0000 UTC m=+0.656306552,LastTimestamp:2025-02-13 15:52:50.428664508 +0000 UTC m=+0.656306552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.110.144.28,}"
Feb 13 15:52:50.483065 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:52:50.486779 kubelet[1817]: I0213 15:52:50.486733    1817 kubelet_node_status.go:73] "Attempting to register node" node="143.110.144.28"
Feb 13 15:52:50.496034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:52:50.503963 kubelet[1817]: I0213 15:52:50.503789    1817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:52:50.505976 kubelet[1817]: I0213 15:52:50.505784    1817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:52:50.505976 kubelet[1817]: I0213 15:52:50.505856    1817 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:52:50.505976 kubelet[1817]: I0213 15:52:50.505886    1817 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:52:50.506220 kubelet[1817]: E0213 15:52:50.506001    1817 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:52:50.511058 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:52:50.516755 kubelet[1817]: I0213 15:52:50.515949    1817 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:52:50.518331 kubelet[1817]: I0213 15:52:50.518277    1817 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:52:50.522257 kubelet[1817]: E0213 15:52:50.522128    1817 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.110.144.28\" not found"
Feb 13 15:52:50.531521 kubelet[1817]: I0213 15:52:50.531415    1817 kubelet_node_status.go:76] "Successfully registered node" node="143.110.144.28"
Feb 13 15:52:50.633281 kubelet[1817]: E0213 15:52:50.633053    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:50.734119 kubelet[1817]: E0213 15:52:50.734053    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:50.835190 kubelet[1817]: E0213 15:52:50.835088    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:50.935828 kubelet[1817]: E0213 15:52:50.935627    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.036664 kubelet[1817]: E0213 15:52:51.036597    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.137636 kubelet[1817]: E0213 15:52:51.137563    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.238917 kubelet[1817]: E0213 15:52:51.238818    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.324441 sudo[1675]: pam_unix(sudo:session): session closed for user root
Feb 13 15:52:51.327710 sshd[1672]: Connection closed by 139.178.89.65 port 47056
Feb 13 15:52:51.328328 kubelet[1817]: I0213 15:52:51.328021    1817 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:52:51.328328 kubelet[1817]: W0213 15:52:51.328255    1817 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:52:51.328673 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:51.334194 systemd[1]: sshd@9-143.110.144.28:22-139.178.89.65:47056.service: Deactivated successfully.
Feb 13 15:52:51.337088 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:52:51.339093 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:52:51.339555 kubelet[1817]: E0213 15:52:51.339391    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.341068 systemd-logind[1453]: Removed session 9.
Feb 13 15:52:51.359614 kubelet[1817]: E0213 15:52:51.359544    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:51.439860 kubelet[1817]: E0213 15:52:51.439690    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.541247 kubelet[1817]: E0213 15:52:51.541047    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.641949 kubelet[1817]: E0213 15:52:51.641871    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.743018 kubelet[1817]: E0213 15:52:51.742947    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.843726 kubelet[1817]: E0213 15:52:51.843522    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:51.944557 kubelet[1817]: E0213 15:52:51.944478    1817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.144.28\" not found"
Feb 13 15:52:52.047269 kubelet[1817]: I0213 15:52:52.046737    1817 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:52:52.047954 kubelet[1817]: I0213 15:52:52.047566    1817 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:52:52.048057 containerd[1477]: time="2025-02-13T15:52:52.047259889Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:52:52.360234 kubelet[1817]: I0213 15:52:52.359995    1817 apiserver.go:52] "Watching apiserver"
Feb 13 15:52:52.360503 kubelet[1817]: E0213 15:52:52.360465    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:52.388081 kubelet[1817]: I0213 15:52:52.387982    1817 topology_manager.go:215] "Topology Admit Handler" podUID="98e96dd8-4bbc-4b33-9139-a736feac7e57" podNamespace="calico-system" podName="calico-node-d26vf"
Feb 13 15:52:52.388319 kubelet[1817]: I0213 15:52:52.388230    1817 topology_manager.go:215] "Topology Admit Handler" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7" podNamespace="calico-system" podName="csi-node-driver-88c9f"
Feb 13 15:52:52.388319 kubelet[1817]: I0213 15:52:52.388294    1817 topology_manager.go:215] "Topology Admit Handler" podUID="566caae6-0065-43bc-84e2-52a65e8d6bfd" podNamespace="kube-system" podName="kube-proxy-8b9kl"
Feb 13 15:52:52.390280 kubelet[1817]: E0213 15:52:52.390227    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:52:52.428451 systemd[1]: Created slice kubepods-besteffort-pod98e96dd8_4bbc_4b33_9139_a736feac7e57.slice - libcontainer container kubepods-besteffort-pod98e96dd8_4bbc_4b33_9139_a736feac7e57.slice.
Feb 13 15:52:52.454177 systemd[1]: Created slice kubepods-besteffort-pod566caae6_0065_43bc_84e2_52a65e8d6bfd.slice - libcontainer container kubepods-besteffort-pod566caae6_0065_43bc_84e2_52a65e8d6bfd.slice.
Feb 13 15:52:52.490115 kubelet[1817]: I0213 15:52:52.489995    1817 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:52:52.501475 kubelet[1817]: I0213 15:52:52.500685    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98e96dd8-4bbc-4b33-9139-a736feac7e57-tigera-ca-bundle\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501475 kubelet[1817]: I0213 15:52:52.500966    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98e96dd8-4bbc-4b33-9139-a736feac7e57-node-certs\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501475 kubelet[1817]: I0213 15:52:52.501009    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-cni-bin-dir\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501475 kubelet[1817]: I0213 15:52:52.501047    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-cni-log-dir\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501475 kubelet[1817]: I0213 15:52:52.501080    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6wg\" (UniqueName: \"kubernetes.io/projected/98e96dd8-4bbc-4b33-9139-a736feac7e57-kube-api-access-bt6wg\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501826 kubelet[1817]: I0213 15:52:52.501114    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/566caae6-0065-43bc-84e2-52a65e8d6bfd-xtables-lock\") pod \"kube-proxy-8b9kl\" (UID: \"566caae6-0065-43bc-84e2-52a65e8d6bfd\") " pod="kube-system/kube-proxy-8b9kl"
Feb 13 15:52:52.501826 kubelet[1817]: I0213 15:52:52.501141    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/566caae6-0065-43bc-84e2-52a65e8d6bfd-lib-modules\") pod \"kube-proxy-8b9kl\" (UID: \"566caae6-0065-43bc-84e2-52a65e8d6bfd\") " pod="kube-system/kube-proxy-8b9kl"
Feb 13 15:52:52.501826 kubelet[1817]: I0213 15:52:52.501195    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-lib-modules\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501826 kubelet[1817]: I0213 15:52:52.501280    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-cni-net-dir\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501826 kubelet[1817]: I0213 15:52:52.501316    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f99465c-7630-4cd4-9490-a0c5effdd0c7-varrun\") pod \"csi-node-driver-88c9f\" (UID: \"0f99465c-7630-4cd4-9490-a0c5effdd0c7\") " pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:52:52.501996 kubelet[1817]: I0213 15:52:52.501345    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/566caae6-0065-43bc-84e2-52a65e8d6bfd-kube-proxy\") pod \"kube-proxy-8b9kl\" (UID: \"566caae6-0065-43bc-84e2-52a65e8d6bfd\") " pod="kube-system/kube-proxy-8b9kl"
Feb 13 15:52:52.501996 kubelet[1817]: I0213 15:52:52.501377    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9xn\" (UniqueName: \"kubernetes.io/projected/566caae6-0065-43bc-84e2-52a65e8d6bfd-kube-api-access-cg9xn\") pod \"kube-proxy-8b9kl\" (UID: \"566caae6-0065-43bc-84e2-52a65e8d6bfd\") " pod="kube-system/kube-proxy-8b9kl"
Feb 13 15:52:52.501996 kubelet[1817]: I0213 15:52:52.501431    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-xtables-lock\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.501996 kubelet[1817]: I0213 15:52:52.501499    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f99465c-7630-4cd4-9490-a0c5effdd0c7-socket-dir\") pod \"csi-node-driver-88c9f\" (UID: \"0f99465c-7630-4cd4-9490-a0c5effdd0c7\") " pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:52:52.501996 kubelet[1817]: I0213 15:52:52.501594    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-policysync\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.502230 kubelet[1817]: I0213 15:52:52.501632    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-var-run-calico\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.502230 kubelet[1817]: I0213 15:52:52.501662    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-var-lib-calico\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.502230 kubelet[1817]: I0213 15:52:52.501701    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98e96dd8-4bbc-4b33-9139-a736feac7e57-flexvol-driver-host\") pod \"calico-node-d26vf\" (UID: \"98e96dd8-4bbc-4b33-9139-a736feac7e57\") " pod="calico-system/calico-node-d26vf"
Feb 13 15:52:52.502230 kubelet[1817]: I0213 15:52:52.501738    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f99465c-7630-4cd4-9490-a0c5effdd0c7-kubelet-dir\") pod \"csi-node-driver-88c9f\" (UID: \"0f99465c-7630-4cd4-9490-a0c5effdd0c7\") " pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:52:52.502230 kubelet[1817]: I0213 15:52:52.501772    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f99465c-7630-4cd4-9490-a0c5effdd0c7-registration-dir\") pod \"csi-node-driver-88c9f\" (UID: \"0f99465c-7630-4cd4-9490-a0c5effdd0c7\") " pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:52:52.502406 kubelet[1817]: I0213 15:52:52.501805    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxmw\" (UniqueName: \"kubernetes.io/projected/0f99465c-7630-4cd4-9490-a0c5effdd0c7-kube-api-access-mvxmw\") pod \"csi-node-driver-88c9f\" (UID: \"0f99465c-7630-4cd4-9490-a0c5effdd0c7\") " pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:52:52.613205 kubelet[1817]: E0213 15:52:52.613047    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:52.613499 kubelet[1817]: W0213 15:52:52.613374    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:52.613499 kubelet[1817]: E0213 15:52:52.613438    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:52.659797 kubelet[1817]: E0213 15:52:52.659759    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:52.660176 kubelet[1817]: W0213 15:52:52.659995    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:52.660176 kubelet[1817]: E0213 15:52:52.660031    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:52.660592 kubelet[1817]: E0213 15:52:52.660575    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:52.669341 kubelet[1817]: W0213 15:52:52.668924    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:52.669341 kubelet[1817]: E0213 15:52:52.669011    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:52.686950 kubelet[1817]: E0213 15:52:52.686046    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:52.686950 kubelet[1817]: W0213 15:52:52.686082    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:52.686950 kubelet[1817]: E0213 15:52:52.686122    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:52.691671 kubelet[1817]: E0213 15:52:52.690036    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:52.691671 kubelet[1817]: W0213 15:52:52.690069    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:52.691671 kubelet[1817]: E0213 15:52:52.690108    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:52.749875 kubelet[1817]: E0213 15:52:52.749342    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:52.750847 containerd[1477]: time="2025-02-13T15:52:52.750647717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d26vf,Uid:98e96dd8-4bbc-4b33-9139-a736feac7e57,Namespace:calico-system,Attempt:0,}"
Feb 13 15:52:52.766862 kubelet[1817]: E0213 15:52:52.766782    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:52.767822 containerd[1477]: time="2025-02-13T15:52:52.767558657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8b9kl,Uid:566caae6-0065-43bc-84e2-52a65e8d6bfd,Namespace:kube-system,Attempt:0,}"
Feb 13 15:52:52.777222 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Feb 13 15:52:53.361757 kubelet[1817]: E0213 15:52:53.361689    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:53.580486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085358157.mount: Deactivated successfully.
Feb 13 15:52:53.606359 containerd[1477]: time="2025-02-13T15:52:53.606212560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:52:53.614252 containerd[1477]: time="2025-02-13T15:52:53.613999876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:52:53.617265 containerd[1477]: time="2025-02-13T15:52:53.617109038Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:52:53.621741 containerd[1477]: time="2025-02-13T15:52:53.620639556Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:52:53.622503 containerd[1477]: time="2025-02-13T15:52:53.622386356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:52:53.630805 containerd[1477]: time="2025-02-13T15:52:53.630656361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:52:53.637997 containerd[1477]: time="2025-02-13T15:52:53.631732704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 864.027807ms"
Feb 13 15:52:53.639271 containerd[1477]: time="2025-02-13T15:52:53.639215134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 888.401777ms"
Feb 13 15:52:53.951977 containerd[1477]: time="2025-02-13T15:52:53.947201115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:53.952800 containerd[1477]: time="2025-02-13T15:52:53.952680975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:53.953135 containerd[1477]: time="2025-02-13T15:52:53.953038419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:53.953751 containerd[1477]: time="2025-02-13T15:52:53.953698291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:53.964014 containerd[1477]: time="2025-02-13T15:52:53.963822268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:53.964334 containerd[1477]: time="2025-02-13T15:52:53.964263240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:53.964526 containerd[1477]: time="2025-02-13T15:52:53.964480617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:53.964906 containerd[1477]: time="2025-02-13T15:52:53.964862477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:54.129584 systemd[1]: Started cri-containerd-c71affdbc1ace309df877475c380381842dbe1c36139a6a48b74f26af1e12cbf.scope - libcontainer container c71affdbc1ace309df877475c380381842dbe1c36139a6a48b74f26af1e12cbf.
Feb 13 15:52:54.149584 systemd[1]: Started cri-containerd-a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75.scope - libcontainer container a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75.
Feb 13 15:52:54.211712 containerd[1477]: time="2025-02-13T15:52:54.211635015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8b9kl,Uid:566caae6-0065-43bc-84e2-52a65e8d6bfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c71affdbc1ace309df877475c380381842dbe1c36139a6a48b74f26af1e12cbf\""
Feb 13 15:52:54.215854 kubelet[1817]: E0213 15:52:54.215808    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:54.225085 containerd[1477]: time="2025-02-13T15:52:54.224275425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:52:54.228230 containerd[1477]: time="2025-02-13T15:52:54.228032591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d26vf,Uid:98e96dd8-4bbc-4b33-9139-a736feac7e57,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\""
Feb 13 15:52:54.232222 kubelet[1817]: E0213 15:52:54.232142    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:54.362867 kubelet[1817]: E0213 15:52:54.362782    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:54.507615 kubelet[1817]: E0213 15:52:54.506961    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:52:55.363315 kubelet[1817]: E0213 15:52:55.363236    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:55.855451 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Feb 13 15:52:55.958720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697336117.mount: Deactivated successfully.
Feb 13 15:52:56.363541 kubelet[1817]: E0213 15:52:56.363479    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:56.508217 kubelet[1817]: E0213 15:52:56.506834    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:52:56.851953 containerd[1477]: time="2025-02-13T15:52:56.850797459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:56.853524 containerd[1477]: time="2025-02-13T15:52:56.853424889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:52:56.856451 containerd[1477]: time="2025-02-13T15:52:56.856352740Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:56.865743 containerd[1477]: time="2025-02-13T15:52:56.865659920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:56.866934 containerd[1477]: time="2025-02-13T15:52:56.866529191Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.642039259s"
Feb 13 15:52:56.866934 containerd[1477]: time="2025-02-13T15:52:56.866591821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:52:56.870136 containerd[1477]: time="2025-02-13T15:52:56.869415426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:52:56.870599 containerd[1477]: time="2025-02-13T15:52:56.870547473Z" level=info msg="CreateContainer within sandbox \"c71affdbc1ace309df877475c380381842dbe1c36139a6a48b74f26af1e12cbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:52:56.909379 containerd[1477]: time="2025-02-13T15:52:56.909304397Z" level=info msg="CreateContainer within sandbox \"c71affdbc1ace309df877475c380381842dbe1c36139a6a48b74f26af1e12cbf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74a66d11695a771de9fa93029e171aae8d69b7d0fbff5427695dc597bb0ce041\""
Feb 13 15:52:56.911248 containerd[1477]: time="2025-02-13T15:52:56.910752548Z" level=info msg="StartContainer for \"74a66d11695a771de9fa93029e171aae8d69b7d0fbff5427695dc597bb0ce041\""
Feb 13 15:52:56.966807 systemd[1]: Started cri-containerd-74a66d11695a771de9fa93029e171aae8d69b7d0fbff5427695dc597bb0ce041.scope - libcontainer container 74a66d11695a771de9fa93029e171aae8d69b7d0fbff5427695dc597bb0ce041.
Feb 13 15:52:57.051143 containerd[1477]: time="2025-02-13T15:52:57.049429302Z" level=info msg="StartContainer for \"74a66d11695a771de9fa93029e171aae8d69b7d0fbff5427695dc597bb0ce041\" returns successfully"
Feb 13 15:52:57.363740 kubelet[1817]: E0213 15:52:57.363634    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:57.536294 kubelet[1817]: E0213 15:52:57.536102    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:57.548047 kubelet[1817]: E0213 15:52:57.548003    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.548342 kubelet[1817]: W0213 15:52:57.548314    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.548514 kubelet[1817]: E0213 15:52:57.548467    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.549108 kubelet[1817]: E0213 15:52:57.549031    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.549108 kubelet[1817]: W0213 15:52:57.549051    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.549108 kubelet[1817]: E0213 15:52:57.549075    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.549754 kubelet[1817]: E0213 15:52:57.549595    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.549754 kubelet[1817]: W0213 15:52:57.549618    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.549754 kubelet[1817]: E0213 15:52:57.549637    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.550276 kubelet[1817]: E0213 15:52:57.550213    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.550276 kubelet[1817]: W0213 15:52:57.550227    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.550276 kubelet[1817]: E0213 15:52:57.550245    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.550845 kubelet[1817]: E0213 15:52:57.550735    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.550845 kubelet[1817]: W0213 15:52:57.550751    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.550845 kubelet[1817]: E0213 15:52:57.550784    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.551507 kubelet[1817]: E0213 15:52:57.551491    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.551610 kubelet[1817]: W0213 15:52:57.551598    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.551764 kubelet[1817]: E0213 15:52:57.551749    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.552220 kubelet[1817]: E0213 15:52:57.552205    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.552365 kubelet[1817]: W0213 15:52:57.552349    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.552517 kubelet[1817]: E0213 15:52:57.552441    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.554696 kubelet[1817]: E0213 15:52:57.554001    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.554696 kubelet[1817]: W0213 15:52:57.554611    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.554696 kubelet[1817]: E0213 15:52:57.554641    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.555702 kubelet[1817]: E0213 15:52:57.555525    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.555702 kubelet[1817]: W0213 15:52:57.555550    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.555702 kubelet[1817]: E0213 15:52:57.555576    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.556067 kubelet[1817]: E0213 15:52:57.555988    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.556067 kubelet[1817]: W0213 15:52:57.556001    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.556067 kubelet[1817]: E0213 15:52:57.556020    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.556446 kubelet[1817]: E0213 15:52:57.556366    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.556446 kubelet[1817]: W0213 15:52:57.556378    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.556446 kubelet[1817]: E0213 15:52:57.556395    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.557373 kubelet[1817]: E0213 15:52:57.557192    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.557373 kubelet[1817]: W0213 15:52:57.557231    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.557373 kubelet[1817]: E0213 15:52:57.557257    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.573298 kubelet[1817]: E0213 15:52:57.573008    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.573298 kubelet[1817]: W0213 15:52:57.573055    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.573298 kubelet[1817]: E0213 15:52:57.573097    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.575049 kubelet[1817]: I0213 15:52:57.574553    1817 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8b9kl" podStartSLOduration=4.927694488 podStartE2EDuration="7.57448488s" podCreationTimestamp="2025-02-13 15:52:50 +0000 UTC" firstStartedPulling="2025-02-13 15:52:54.220707431 +0000 UTC m=+4.448349472" lastFinishedPulling="2025-02-13 15:52:56.867497815 +0000 UTC m=+7.095139864" observedRunningTime="2025-02-13 15:52:57.574029906 +0000 UTC m=+7.801671967" watchObservedRunningTime="2025-02-13 15:52:57.57448488 +0000 UTC m=+7.802126946"
Feb 13 15:52:57.575997 kubelet[1817]: E0213 15:52:57.575818    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.575997 kubelet[1817]: W0213 15:52:57.575854    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.576493 kubelet[1817]: E0213 15:52:57.575881    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.577124 kubelet[1817]: E0213 15:52:57.576973    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.577124 kubelet[1817]: W0213 15:52:57.576997    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.577124 kubelet[1817]: E0213 15:52:57.577032    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.578599 kubelet[1817]: E0213 15:52:57.578433    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.578599 kubelet[1817]: W0213 15:52:57.578458    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.578599 kubelet[1817]: E0213 15:52:57.578493    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.579690 kubelet[1817]: E0213 15:52:57.579488    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.579690 kubelet[1817]: W0213 15:52:57.579511    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.579690 kubelet[1817]: E0213 15:52:57.579540    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.580086 kubelet[1817]: E0213 15:52:57.579940    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.580086 kubelet[1817]: W0213 15:52:57.579956    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.580086 kubelet[1817]: E0213 15:52:57.579979    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.580472 kubelet[1817]: E0213 15:52:57.580378    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.580472 kubelet[1817]: W0213 15:52:57.580395    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.580472 kubelet[1817]: E0213 15:52:57.580413    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.581207 kubelet[1817]: E0213 15:52:57.581102    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.581207 kubelet[1817]: W0213 15:52:57.581120    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.581207 kubelet[1817]: E0213 15:52:57.581138    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.582136 kubelet[1817]: E0213 15:52:57.581854    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.582136 kubelet[1817]: W0213 15:52:57.581871    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.582136 kubelet[1817]: E0213 15:52:57.581889    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.583772 kubelet[1817]: E0213 15:52:57.583512    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.583772 kubelet[1817]: W0213 15:52:57.583535    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.583772 kubelet[1817]: E0213 15:52:57.583578    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.584394 kubelet[1817]: E0213 15:52:57.584210    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.584394 kubelet[1817]: W0213 15:52:57.584230    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.584394 kubelet[1817]: E0213 15:52:57.584254    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.585296 kubelet[1817]: E0213 15:52:57.584558    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.585296 kubelet[1817]: W0213 15:52:57.584572    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.585296 kubelet[1817]: E0213 15:52:57.584593    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.586312 kubelet[1817]: E0213 15:52:57.585814    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.586312 kubelet[1817]: W0213 15:52:57.585841    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.586312 kubelet[1817]: E0213 15:52:57.585932    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.586483 kubelet[1817]: E0213 15:52:57.586342    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.586483 kubelet[1817]: W0213 15:52:57.586353    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.586483 kubelet[1817]: E0213 15:52:57.586367    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.586679    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.587799 kubelet[1817]: W0213 15:52:57.586709    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.586758    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.587265    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.587799 kubelet[1817]: W0213 15:52:57.587306    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.587326    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.587731    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.587799 kubelet[1817]: W0213 15:52:57.587745    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.587799 kubelet[1817]: E0213 15:52:57.587805    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.588296 kubelet[1817]: E0213 15:52:57.588253    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.588296 kubelet[1817]: W0213 15:52:57.588269    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.588385 kubelet[1817]: E0213 15:52:57.588303    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.588876 kubelet[1817]: E0213 15:52:57.588842    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.588876 kubelet[1817]: W0213 15:52:57.588876    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.588993 kubelet[1817]: E0213 15:52:57.588896    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:57.589532 kubelet[1817]: E0213 15:52:57.589483    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:57.589532 kubelet[1817]: W0213 15:52:57.589522    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:57.589662 kubelet[1817]: E0213 15:52:57.589543    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.365033 kubelet[1817]: E0213 15:52:58.364954    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:58.513509 kubelet[1817]: E0213 15:52:58.513462    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:52:58.530836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037083892.mount: Deactivated successfully.
Feb 13 15:52:58.539427 kubelet[1817]: E0213 15:52:58.539336    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:58.590442 kubelet[1817]: E0213 15:52:58.590139    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.590442 kubelet[1817]: W0213 15:52:58.590260    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.590442 kubelet[1817]: E0213 15:52:58.590324    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.591488 kubelet[1817]: E0213 15:52:58.591309    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.591488 kubelet[1817]: W0213 15:52:58.591341    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.591488 kubelet[1817]: E0213 15:52:58.591420    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.592230 kubelet[1817]: E0213 15:52:58.592013    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.592230 kubelet[1817]: W0213 15:52:58.592163    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.592230 kubelet[1817]: E0213 15:52:58.592186    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.592977 kubelet[1817]: E0213 15:52:58.592604    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.592977 kubelet[1817]: W0213 15:52:58.592615    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.592977 kubelet[1817]: E0213 15:52:58.592631    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.593612 kubelet[1817]: E0213 15:52:58.593587    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.593612 kubelet[1817]: W0213 15:52:58.593605    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.593738 kubelet[1817]: E0213 15:52:58.593621    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.594299 kubelet[1817]: E0213 15:52:58.594275    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.594299 kubelet[1817]: W0213 15:52:58.594294    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.594439 kubelet[1817]: E0213 15:52:58.594313    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.594806 kubelet[1817]: E0213 15:52:58.594780    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.594806 kubelet[1817]: W0213 15:52:58.594797    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.594806 kubelet[1817]: E0213 15:52:58.594812    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.595574 kubelet[1817]: E0213 15:52:58.595551    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.595574 kubelet[1817]: W0213 15:52:58.595569    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.595710 kubelet[1817]: E0213 15:52:58.595594    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.596008 kubelet[1817]: E0213 15:52:58.595988    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.596008 kubelet[1817]: W0213 15:52:58.596008    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.596108 kubelet[1817]: E0213 15:52:58.596023    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.596650 kubelet[1817]: E0213 15:52:58.596620    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.596650 kubelet[1817]: W0213 15:52:58.596652    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.596650 kubelet[1817]: E0213 15:52:58.596687    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.598545 kubelet[1817]: E0213 15:52:58.598340    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.598545 kubelet[1817]: W0213 15:52:58.598368    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.598545 kubelet[1817]: E0213 15:52:58.598393    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.598971 kubelet[1817]: E0213 15:52:58.598801    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.598971 kubelet[1817]: W0213 15:52:58.598812    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.598971 kubelet[1817]: E0213 15:52:58.598828    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.599065 kubelet[1817]: E0213 15:52:58.599040    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.599065 kubelet[1817]: W0213 15:52:58.599047    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.599065 kubelet[1817]: E0213 15:52:58.599057    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.599261 kubelet[1817]: E0213 15:52:58.599247    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.599261 kubelet[1817]: W0213 15:52:58.599259    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.599317 kubelet[1817]: E0213 15:52:58.599269    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.599601 kubelet[1817]: E0213 15:52:58.599555    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.599601 kubelet[1817]: W0213 15:52:58.599571    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.599601 kubelet[1817]: E0213 15:52:58.599587    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.599965 kubelet[1817]: E0213 15:52:58.599826    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.599965 kubelet[1817]: W0213 15:52:58.599843    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.599965 kubelet[1817]: E0213 15:52:58.599862    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.600208 kubelet[1817]: E0213 15:52:58.600123    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.600208 kubelet[1817]: W0213 15:52:58.600141    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.600208 kubelet[1817]: E0213 15:52:58.600208    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.600635 kubelet[1817]: E0213 15:52:58.600438    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.600635 kubelet[1817]: W0213 15:52:58.600457    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.600635 kubelet[1817]: E0213 15:52:58.600476    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.600890    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.601553 kubelet[1817]: W0213 15:52:58.600912    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.600931    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.601127    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.601553 kubelet[1817]: W0213 15:52:58.601134    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.601145    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.601504    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.601553 kubelet[1817]: W0213 15:52:58.601518    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.601553 kubelet[1817]: E0213 15:52:58.601536    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.601886 kubelet[1817]: E0213 15:52:58.601865    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.601886 kubelet[1817]: W0213 15:52:58.601886    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.601960 kubelet[1817]: E0213 15:52:58.601908    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.602265 kubelet[1817]: E0213 15:52:58.602243    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.602265 kubelet[1817]: W0213 15:52:58.602264    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.602359 kubelet[1817]: E0213 15:52:58.602349    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.602707 kubelet[1817]: E0213 15:52:58.602686    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.602707 kubelet[1817]: W0213 15:52:58.602702    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.602894 kubelet[1817]: E0213 15:52:58.602775    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.603063 kubelet[1817]: E0213 15:52:58.603041    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.603107 kubelet[1817]: W0213 15:52:58.603064    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.603107 kubelet[1817]: E0213 15:52:58.603087    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.603496 kubelet[1817]: E0213 15:52:58.603476    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.603537 kubelet[1817]: W0213 15:52:58.603496    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.603863 kubelet[1817]: E0213 15:52:58.603566    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.603863 kubelet[1817]: E0213 15:52:58.603774    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.603863 kubelet[1817]: W0213 15:52:58.603784    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.603863 kubelet[1817]: E0213 15:52:58.603814    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.604135 kubelet[1817]: E0213 15:52:58.604112    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.604135 kubelet[1817]: W0213 15:52:58.604134    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.604224 kubelet[1817]: E0213 15:52:58.604178    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.604498 kubelet[1817]: E0213 15:52:58.604481    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.604498 kubelet[1817]: W0213 15:52:58.604496    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.604561 kubelet[1817]: E0213 15:52:58.604513    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.605096 kubelet[1817]: E0213 15:52:58.605074    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.605096 kubelet[1817]: W0213 15:52:58.605096    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.605614 kubelet[1817]: E0213 15:52:58.605290    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.605614 kubelet[1817]: E0213 15:52:58.605519    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.605614 kubelet[1817]: W0213 15:52:58.605531    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.605614 kubelet[1817]: E0213 15:52:58.605549    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.606119 kubelet[1817]: E0213 15:52:58.606094    1817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:52:58.606119 kubelet[1817]: W0213 15:52:58.606113    1817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:52:58.606119 kubelet[1817]: E0213 15:52:58.606127    1817 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:52:58.740600 containerd[1477]: time="2025-02-13T15:52:58.740482060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:58.743505 containerd[1477]: time="2025-02-13T15:52:58.743372571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 15:52:58.746474 containerd[1477]: time="2025-02-13T15:52:58.746095901Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:58.750956 containerd[1477]: time="2025-02-13T15:52:58.750879060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:58.753460 containerd[1477]: time="2025-02-13T15:52:58.752313532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.882842944s"
Feb 13 15:52:58.753460 containerd[1477]: time="2025-02-13T15:52:58.752384052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:52:58.756196 containerd[1477]: time="2025-02-13T15:52:58.756055283Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:52:58.795997 containerd[1477]: time="2025-02-13T15:52:58.795936218Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb\""
Feb 13 15:52:58.797450 containerd[1477]: time="2025-02-13T15:52:58.797208826Z" level=info msg="StartContainer for \"92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb\""
Feb 13 15:52:58.851512 systemd[1]: Started cri-containerd-92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb.scope - libcontainer container 92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb.
Feb 13 15:52:58.929625 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Feb 13 15:52:58.939652 containerd[1477]: time="2025-02-13T15:52:58.939587039Z" level=info msg="StartContainer for \"92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb\" returns successfully"
Feb 13 15:52:58.965933 systemd[1]: cri-containerd-92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb.scope: Deactivated successfully.
Feb 13 15:52:59.149526 containerd[1477]: time="2025-02-13T15:52:59.149265822Z" level=info msg="shim disconnected" id=92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb namespace=k8s.io
Feb 13 15:52:59.151611 containerd[1477]: time="2025-02-13T15:52:59.150081707Z" level=warning msg="cleaning up after shim disconnected" id=92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb namespace=k8s.io
Feb 13 15:52:59.151611 containerd[1477]: time="2025-02-13T15:52:59.150566823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:52:59.365452 kubelet[1817]: E0213 15:52:59.365360    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:59.455439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92825ac56a9e4e58377d86dd3ea4359ae8513edcbbb114e62157a4c845f64cfb-rootfs.mount: Deactivated successfully.
Feb 13 15:52:59.543896 kubelet[1817]: E0213 15:52:59.543251    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:52:59.544760 containerd[1477]: time="2025-02-13T15:52:59.544331883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:53:00.366710 kubelet[1817]: E0213 15:53:00.366574    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:00.508185 kubelet[1817]: E0213 15:53:00.507642    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:01.370971 kubelet[1817]: E0213 15:53:01.370875    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:02.387571 kubelet[1817]: E0213 15:53:02.371214    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:02.507598 kubelet[1817]: E0213 15:53:02.506938    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:03.372811 kubelet[1817]: E0213 15:53:03.372009    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:04.377453 kubelet[1817]: E0213 15:53:04.377342    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:04.509871 kubelet[1817]: E0213 15:53:04.509374    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:05.377794 kubelet[1817]: E0213 15:53:05.377593    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:06.377902 kubelet[1817]: E0213 15:53:06.377780    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:06.507884 kubelet[1817]: E0213 15:53:06.507816    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:06.892124 containerd[1477]: time="2025-02-13T15:53:06.892016511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:06.895370 containerd[1477]: time="2025-02-13T15:53:06.895281534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:53:06.898227 containerd[1477]: time="2025-02-13T15:53:06.898109134Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:06.903866 containerd[1477]: time="2025-02-13T15:53:06.903743018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:06.906995 containerd[1477]: time="2025-02-13T15:53:06.906911051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.362517242s"
Feb 13 15:53:06.906995 containerd[1477]: time="2025-02-13T15:53:06.906983029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:53:06.915118 containerd[1477]: time="2025-02-13T15:53:06.914743615Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:53:06.950576 containerd[1477]: time="2025-02-13T15:53:06.949592059Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544\""
Feb 13 15:53:06.951035 containerd[1477]: time="2025-02-13T15:53:06.950992039Z" level=info msg="StartContainer for \"726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544\""
Feb 13 15:53:07.034336 systemd[1]: Started cri-containerd-726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544.scope - libcontainer container 726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544.
Feb 13 15:53:07.091289 containerd[1477]: time="2025-02-13T15:53:07.091137847Z" level=info msg="StartContainer for \"726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544\" returns successfully"
Feb 13 15:53:07.379290 kubelet[1817]: E0213 15:53:07.379189    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:07.583342 kubelet[1817]: E0213 15:53:07.582576    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:08.156458 systemd[1]: cri-containerd-726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544.scope: Deactivated successfully.
Feb 13 15:53:08.207835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544-rootfs.mount: Deactivated successfully.
Feb 13 15:53:08.215723 kubelet[1817]: I0213 15:53:08.214801    1817 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:53:08.379866 kubelet[1817]: E0213 15:53:08.379781    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:08.432041 containerd[1477]: time="2025-02-13T15:53:08.431406752Z" level=info msg="shim disconnected" id=726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544 namespace=k8s.io
Feb 13 15:53:08.432041 containerd[1477]: time="2025-02-13T15:53:08.431529735Z" level=warning msg="cleaning up after shim disconnected" id=726b73a5b40bb66edbf7f24e45e3ace6e4d39a6997527384b93729c9e0cb7544 namespace=k8s.io
Feb 13 15:53:08.432041 containerd[1477]: time="2025-02-13T15:53:08.431544207Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:53:08.525503 systemd[1]: Created slice kubepods-besteffort-pod0f99465c_7630_4cd4_9490_a0c5effdd0c7.slice - libcontainer container kubepods-besteffort-pod0f99465c_7630_4cd4_9490_a0c5effdd0c7.slice.
Feb 13 15:53:08.530697 containerd[1477]: time="2025-02-13T15:53:08.529955663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:0,}"
Feb 13 15:53:08.590493 kubelet[1817]: E0213 15:53:08.589853    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:08.592472 containerd[1477]: time="2025-02-13T15:53:08.592406438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:53:08.661649 containerd[1477]: time="2025-02-13T15:53:08.661555663Z" level=error msg="Failed to destroy network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:08.664276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede-shm.mount: Deactivated successfully.
Feb 13 15:53:08.664442 containerd[1477]: time="2025-02-13T15:53:08.664399560Z" level=error msg="encountered an error cleaning up failed sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:08.665001 containerd[1477]: time="2025-02-13T15:53:08.664904282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:08.666455 kubelet[1817]: E0213 15:53:08.666417    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:08.667049 kubelet[1817]: E0213 15:53:08.666974    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:08.667270 kubelet[1817]: E0213 15:53:08.667199    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:08.667802 kubelet[1817]: E0213 15:53:08.667781    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:09.381322 kubelet[1817]: E0213 15:53:09.380976    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:09.592400 kubelet[1817]: I0213 15:53:09.592327    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede"
Feb 13 15:53:09.596233 containerd[1477]: time="2025-02-13T15:53:09.593909475Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:09.596233 containerd[1477]: time="2025-02-13T15:53:09.594261989Z" level=info msg="Ensure that sandbox 1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede in task-service has been cleanup successfully"
Feb 13 15:53:09.597376 containerd[1477]: time="2025-02-13T15:53:09.597298341Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:09.597376 containerd[1477]: time="2025-02-13T15:53:09.597355216Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:09.598512 containerd[1477]: time="2025-02-13T15:53:09.598429520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:1,}"
Feb 13 15:53:09.599092 systemd[1]: run-netns-cni\x2d42cc4a05\x2db043\x2d28f4\x2d5fc8\x2dd032caefe277.mount: Deactivated successfully.
Feb 13 15:53:09.725474 containerd[1477]: time="2025-02-13T15:53:09.725397658Z" level=error msg="Failed to destroy network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:09.727714 containerd[1477]: time="2025-02-13T15:53:09.727583114Z" level=error msg="encountered an error cleaning up failed sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:09.727993 containerd[1477]: time="2025-02-13T15:53:09.727724878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:09.729493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac-shm.mount: Deactivated successfully.
Feb 13 15:53:09.730189 kubelet[1817]: E0213 15:53:09.730118    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:09.730456 kubelet[1817]: E0213 15:53:09.730229    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:09.730456 kubelet[1817]: E0213 15:53:09.730265    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:09.730456 kubelet[1817]: E0213 15:53:09.730328    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:10.357214 kubelet[1817]: E0213 15:53:10.357123    1817 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:10.382177 kubelet[1817]: E0213 15:53:10.382068    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:10.597143 kubelet[1817]: I0213 15:53:10.596210    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac"
Feb 13 15:53:10.597873 containerd[1477]: time="2025-02-13T15:53:10.597517684Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:10.598444 containerd[1477]: time="2025-02-13T15:53:10.597876955Z" level=info msg="Ensure that sandbox 91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac in task-service has been cleanup successfully"
Feb 13 15:53:10.598444 containerd[1477]: time="2025-02-13T15:53:10.598163049Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:10.598444 containerd[1477]: time="2025-02-13T15:53:10.598191970Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:10.604188 containerd[1477]: time="2025-02-13T15:53:10.600866589Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:10.604188 containerd[1477]: time="2025-02-13T15:53:10.601110801Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:10.604188 containerd[1477]: time="2025-02-13T15:53:10.601234418Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:10.602244 systemd[1]: run-netns-cni\x2d5d8cb65b\x2d7b1f\x2d6184\x2d01f0\x2dfb6687a98cc4.mount: Deactivated successfully.
Feb 13 15:53:10.604829 containerd[1477]: time="2025-02-13T15:53:10.604284858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:2,}"
Feb 13 15:53:10.769864 containerd[1477]: time="2025-02-13T15:53:10.764924601Z" level=error msg="Failed to destroy network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:10.769864 containerd[1477]: time="2025-02-13T15:53:10.766551854Z" level=error msg="encountered an error cleaning up failed sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:10.769864 containerd[1477]: time="2025-02-13T15:53:10.766662619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:10.770213 kubelet[1817]: E0213 15:53:10.768355    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:10.770213 kubelet[1817]: E0213 15:53:10.768428    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:10.770213 kubelet[1817]: E0213 15:53:10.768461    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:10.768072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9-shm.mount: Deactivated successfully.
Feb 13 15:53:10.770485 kubelet[1817]: E0213 15:53:10.768553    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:11.382848 kubelet[1817]: E0213 15:53:11.382758    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:11.541022 kubelet[1817]: I0213 15:53:11.540959    1817 topology_manager.go:215] "Topology Admit Handler" podUID="1fda4028-67fe-488c-814b-ef072254e845" podNamespace="default" podName="nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:11.556762 systemd[1]: Created slice kubepods-besteffort-pod1fda4028_67fe_488c_814b_ef072254e845.slice - libcontainer container kubepods-besteffort-pod1fda4028_67fe_488c_814b_ef072254e845.slice.
Feb 13 15:53:11.606337 kubelet[1817]: I0213 15:53:11.604774    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9"
Feb 13 15:53:11.606501 containerd[1477]: time="2025-02-13T15:53:11.606217304Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:11.608286 containerd[1477]: time="2025-02-13T15:53:11.606545950Z" level=info msg="Ensure that sandbox 0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9 in task-service has been cleanup successfully"
Feb 13 15:53:11.610476 systemd[1]: run-netns-cni\x2d7caf6c63\x2d828b\x2ddd50\x2d7776\x2df16c20357e91.mount: Deactivated successfully.
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.610806135Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.610849540Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.611733619Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.611897174Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.611913805Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.612470958Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.612625005Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:11.613200 containerd[1477]: time="2025-02-13T15:53:11.612683440Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:11.614235 containerd[1477]: time="2025-02-13T15:53:11.613680592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:3,}"
Feb 13 15:53:11.686968 kubelet[1817]: I0213 15:53:11.686629    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpj4\" (UniqueName: \"kubernetes.io/projected/1fda4028-67fe-488c-814b-ef072254e845-kube-api-access-vvpj4\") pod \"nginx-deployment-6d5f899847-5kpsz\" (UID: \"1fda4028-67fe-488c-814b-ef072254e845\") " pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:11.767960 containerd[1477]: time="2025-02-13T15:53:11.767871645Z" level=error msg="Failed to destroy network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:11.770760 containerd[1477]: time="2025-02-13T15:53:11.770493324Z" level=error msg="encountered an error cleaning up failed sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:11.770760 containerd[1477]: time="2025-02-13T15:53:11.770592530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:11.772268 kubelet[1817]: E0213 15:53:11.771460    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:11.772268 kubelet[1817]: E0213 15:53:11.771523    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:11.772268 kubelet[1817]: E0213 15:53:11.771545    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:11.772468 kubelet[1817]: E0213 15:53:11.771601    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:11.773704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc-shm.mount: Deactivated successfully.
Feb 13 15:53:11.866061 containerd[1477]: time="2025-02-13T15:53:11.865646679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:0,}"
Feb 13 15:53:12.019643 containerd[1477]: time="2025-02-13T15:53:12.019221818Z" level=error msg="Failed to destroy network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.020786 containerd[1477]: time="2025-02-13T15:53:12.020610552Z" level=error msg="encountered an error cleaning up failed sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.020786 containerd[1477]: time="2025-02-13T15:53:12.020754816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.021812 kubelet[1817]: E0213 15:53:12.021136    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.021812 kubelet[1817]: E0213 15:53:12.021326    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:12.021812 kubelet[1817]: E0213 15:53:12.021371    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:12.022054 kubelet[1817]: E0213 15:53:12.021497    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:12.394705 kubelet[1817]: E0213 15:53:12.394389    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:12.615238 kubelet[1817]: I0213 15:53:12.613210    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b"
Feb 13 15:53:12.616065 containerd[1477]: time="2025-02-13T15:53:12.616010264Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:12.619191 containerd[1477]: time="2025-02-13T15:53:12.617513444Z" level=info msg="Ensure that sandbox 5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b in task-service has been cleanup successfully"
Feb 13 15:53:12.620859 systemd[1]: run-netns-cni\x2daf1ae6b8\x2d7a2e\x2dfcf9\x2d6cbc\x2d3aa9edd7ec12.mount: Deactivated successfully.
Feb 13 15:53:12.622657 containerd[1477]: time="2025-02-13T15:53:12.622603239Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:12.624334 containerd[1477]: time="2025-02-13T15:53:12.624282711Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:12.625586 containerd[1477]: time="2025-02-13T15:53:12.625508840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:1,}"
Feb 13 15:53:12.635215 kubelet[1817]: I0213 15:53:12.634819    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc"
Feb 13 15:53:12.642381 containerd[1477]: time="2025-02-13T15:53:12.642304070Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:12.642700 containerd[1477]: time="2025-02-13T15:53:12.642669963Z" level=info msg="Ensure that sandbox cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc in task-service has been cleanup successfully"
Feb 13 15:53:12.646654 systemd[1]: run-netns-cni\x2da071a055\x2dafde\x2d3f29\x2d328b\x2de8e57e963ef9.mount: Deactivated successfully.
Feb 13 15:53:12.651405 containerd[1477]: time="2025-02-13T15:53:12.650702516Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:12.652300 containerd[1477]: time="2025-02-13T15:53:12.651611996Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:12.653543 containerd[1477]: time="2025-02-13T15:53:12.653478645Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:12.653832 containerd[1477]: time="2025-02-13T15:53:12.653656296Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:12.653832 containerd[1477]: time="2025-02-13T15:53:12.653674398Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:12.655708 containerd[1477]: time="2025-02-13T15:53:12.655651704Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:12.656889 containerd[1477]: time="2025-02-13T15:53:12.656723632Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:12.656889 containerd[1477]: time="2025-02-13T15:53:12.656772572Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:12.658064 containerd[1477]: time="2025-02-13T15:53:12.657778476Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:12.658455 containerd[1477]: time="2025-02-13T15:53:12.658363531Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:12.658455 containerd[1477]: time="2025-02-13T15:53:12.658397373Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:12.659397 containerd[1477]: time="2025-02-13T15:53:12.659319524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:4,}"
Feb 13 15:53:12.897300 containerd[1477]: time="2025-02-13T15:53:12.896646279Z" level=error msg="Failed to destroy network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.898943 containerd[1477]: time="2025-02-13T15:53:12.898866262Z" level=error msg="encountered an error cleaning up failed sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.899291 containerd[1477]: time="2025-02-13T15:53:12.898997660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.899467 kubelet[1817]: E0213 15:53:12.899421    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.899625 kubelet[1817]: E0213 15:53:12.899509    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:12.899625 kubelet[1817]: E0213 15:53:12.899552    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:12.899770 kubelet[1817]: E0213 15:53:12.899636    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:12.905071 containerd[1477]: time="2025-02-13T15:53:12.904964886Z" level=error msg="Failed to destroy network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.905878 containerd[1477]: time="2025-02-13T15:53:12.905811665Z" level=error msg="encountered an error cleaning up failed sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.906429 containerd[1477]: time="2025-02-13T15:53:12.906320546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.906956 kubelet[1817]: E0213 15:53:12.906782    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:12.906956 kubelet[1817]: E0213 15:53:12.906879    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:12.906956 kubelet[1817]: E0213 15:53:12.906911    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:12.907269 kubelet[1817]: E0213 15:53:12.906992    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:13.395545 kubelet[1817]: E0213 15:53:13.395462    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:13.623715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825-shm.mount: Deactivated successfully.
Feb 13 15:53:13.642039 kubelet[1817]: I0213 15:53:13.641334    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825"
Feb 13 15:53:13.643475 containerd[1477]: time="2025-02-13T15:53:13.642534361Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:13.643475 containerd[1477]: time="2025-02-13T15:53:13.642822487Z" level=info msg="Ensure that sandbox 4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825 in task-service has been cleanup successfully"
Feb 13 15:53:13.648127 systemd[1]: run-netns-cni\x2d2376528b\x2da6e1\x2dc033\x2d984f\x2d0e93da160dcc.mount: Deactivated successfully.
Feb 13 15:53:13.651256 containerd[1477]: time="2025-02-13T15:53:13.650981804Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:13.652065 containerd[1477]: time="2025-02-13T15:53:13.651724899Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:13.658722 containerd[1477]: time="2025-02-13T15:53:13.654228290Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:13.658722 containerd[1477]: time="2025-02-13T15:53:13.656496778Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:13.658722 containerd[1477]: time="2025-02-13T15:53:13.656532022Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:13.661135 containerd[1477]: time="2025-02-13T15:53:13.660609790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:2,}"
Feb 13 15:53:13.661563 kubelet[1817]: I0213 15:53:13.661527    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6"
Feb 13 15:53:13.662668 containerd[1477]: time="2025-02-13T15:53:13.662619700Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:13.663297 containerd[1477]: time="2025-02-13T15:53:13.663188112Z" level=info msg="Ensure that sandbox 61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6 in task-service has been cleanup successfully"
Feb 13 15:53:13.664018 containerd[1477]: time="2025-02-13T15:53:13.663988593Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:13.665485 containerd[1477]: time="2025-02-13T15:53:13.665391766Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:13.667779 systemd[1]: run-netns-cni\x2dab7e96bb\x2dc601\x2d6d27\x2da02b\x2d0b04df9109b3.mount: Deactivated successfully.
Feb 13 15:53:13.671409 containerd[1477]: time="2025-02-13T15:53:13.671322699Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:13.671585 containerd[1477]: time="2025-02-13T15:53:13.671527268Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:13.671585 containerd[1477]: time="2025-02-13T15:53:13.671550171Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:13.672419 containerd[1477]: time="2025-02-13T15:53:13.672252469Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:13.673503 containerd[1477]: time="2025-02-13T15:53:13.672503595Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:13.673503 containerd[1477]: time="2025-02-13T15:53:13.672524511Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:13.673503 containerd[1477]: time="2025-02-13T15:53:13.673244739Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:13.673503 containerd[1477]: time="2025-02-13T15:53:13.673366918Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:13.673503 containerd[1477]: time="2025-02-13T15:53:13.673387278Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:13.674862 containerd[1477]: time="2025-02-13T15:53:13.674546402Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:13.674862 containerd[1477]: time="2025-02-13T15:53:13.674692179Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:13.674862 containerd[1477]: time="2025-02-13T15:53:13.674720428Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:13.677939 containerd[1477]: time="2025-02-13T15:53:13.677591491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:5,}"
Feb 13 15:53:13.920700 containerd[1477]: time="2025-02-13T15:53:13.920397999Z" level=error msg="Failed to destroy network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.922485 containerd[1477]: time="2025-02-13T15:53:13.921243865Z" level=error msg="encountered an error cleaning up failed sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.922485 containerd[1477]: time="2025-02-13T15:53:13.921363853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.924212 kubelet[1817]: E0213 15:53:13.922980    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.924212 kubelet[1817]: E0213 15:53:13.923071    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:13.924212 kubelet[1817]: E0213 15:53:13.923110    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:13.924495 kubelet[1817]: E0213 15:53:13.923229    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:13.944362 containerd[1477]: time="2025-02-13T15:53:13.943932842Z" level=error msg="Failed to destroy network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.944545 containerd[1477]: time="2025-02-13T15:53:13.944408697Z" level=error msg="encountered an error cleaning up failed sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.944545 containerd[1477]: time="2025-02-13T15:53:13.944491648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.945745 kubelet[1817]: E0213 15:53:13.945319    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:13.945745 kubelet[1817]: E0213 15:53:13.945505    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:13.945745 kubelet[1817]: E0213 15:53:13.945583    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:13.945922 kubelet[1817]: E0213 15:53:13.945733    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:14.395908 kubelet[1817]: E0213 15:53:14.395814    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:14.614001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6-shm.mount: Deactivated successfully.
Feb 13 15:53:14.615437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8-shm.mount: Deactivated successfully.
Feb 13 15:53:14.672349 kubelet[1817]: I0213 15:53:14.669467    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6"
Feb 13 15:53:14.673408 containerd[1477]: time="2025-02-13T15:53:14.673289708Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:14.675745 containerd[1477]: time="2025-02-13T15:53:14.673711440Z" level=info msg="Ensure that sandbox df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6 in task-service has been cleanup successfully"
Feb 13 15:53:14.675745 containerd[1477]: time="2025-02-13T15:53:14.673966250Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:14.675745 containerd[1477]: time="2025-02-13T15:53:14.673990249Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:14.678033 systemd[1]: run-netns-cni\x2d94ee8ce5\x2dee32\x2d0f96\x2ddc17\x2ddbe6dff079dc.mount: Deactivated successfully.
Feb 13 15:53:14.680644 containerd[1477]: time="2025-02-13T15:53:14.680587491Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:14.681043 containerd[1477]: time="2025-02-13T15:53:14.680747148Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:14.681043 containerd[1477]: time="2025-02-13T15:53:14.680767824Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:14.681763 containerd[1477]: time="2025-02-13T15:53:14.681696977Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:14.681894 containerd[1477]: time="2025-02-13T15:53:14.681826054Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:14.681894 containerd[1477]: time="2025-02-13T15:53:14.681896345Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:14.682491 containerd[1477]: time="2025-02-13T15:53:14.682457872Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:14.682607 containerd[1477]: time="2025-02-13T15:53:14.682581605Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:14.682607 containerd[1477]: time="2025-02-13T15:53:14.682603134Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:14.683299 kubelet[1817]: I0213 15:53:14.683188    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8"
Feb 13 15:53:14.683580 containerd[1477]: time="2025-02-13T15:53:14.683507088Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:14.683741 containerd[1477]: time="2025-02-13T15:53:14.683633344Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:14.683741 containerd[1477]: time="2025-02-13T15:53:14.683653994Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:14.684759 containerd[1477]: time="2025-02-13T15:53:14.684726180Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:14.685044 containerd[1477]: time="2025-02-13T15:53:14.685009881Z" level=info msg="Ensure that sandbox d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8 in task-service has been cleanup successfully"
Feb 13 15:53:14.685290 containerd[1477]: time="2025-02-13T15:53:14.685264807Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:14.685331 containerd[1477]: time="2025-02-13T15:53:14.685289246Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:14.685426 containerd[1477]: time="2025-02-13T15:53:14.685403809Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:14.685533 containerd[1477]: time="2025-02-13T15:53:14.685514504Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:14.685533 containerd[1477]: time="2025-02-13T15:53:14.685534176Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:14.689978 systemd[1]: run-netns-cni\x2dd8bb3d39\x2d97e6\x2dbf25\x2df8b9\x2d4496f354ee93.mount: Deactivated successfully.
Feb 13 15:53:14.692243 containerd[1477]: time="2025-02-13T15:53:14.691916164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:6,}"
Feb 13 15:53:14.694264 containerd[1477]: time="2025-02-13T15:53:14.692289664Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:14.694264 containerd[1477]: time="2025-02-13T15:53:14.694221612Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:14.694264 containerd[1477]: time="2025-02-13T15:53:14.694241027Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:14.695037 containerd[1477]: time="2025-02-13T15:53:14.695004388Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:14.695267 containerd[1477]: time="2025-02-13T15:53:14.695140349Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:14.695267 containerd[1477]: time="2025-02-13T15:53:14.695184890Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:14.695267 containerd[1477]: time="2025-02-13T15:53:14.695906112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:3,}"
Feb 13 15:53:14.977771 containerd[1477]: time="2025-02-13T15:53:14.977673077Z" level=error msg="Failed to destroy network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.979760 containerd[1477]: time="2025-02-13T15:53:14.979491443Z" level=error msg="encountered an error cleaning up failed sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.980414 containerd[1477]: time="2025-02-13T15:53:14.980102195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.981140 kubelet[1817]: E0213 15:53:14.981102    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.981317 kubelet[1817]: E0213 15:53:14.981207    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:14.981317 kubelet[1817]: E0213 15:53:14.981251    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:14.981770 kubelet[1817]: E0213 15:53:14.981348    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:14.994516 containerd[1477]: time="2025-02-13T15:53:14.994274672Z" level=error msg="Failed to destroy network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.996002 containerd[1477]: time="2025-02-13T15:53:14.995769921Z" level=error msg="encountered an error cleaning up failed sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.996002 containerd[1477]: time="2025-02-13T15:53:14.995868178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.996560 kubelet[1817]: E0213 15:53:14.996475    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:14.996618 kubelet[1817]: E0213 15:53:14.996588    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:14.996660 kubelet[1817]: E0213 15:53:14.996626    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:14.996964 kubelet[1817]: E0213 15:53:14.996706    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:15.406103 kubelet[1817]: E0213 15:53:15.405282    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:15.614621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff-shm.mount: Deactivated successfully.
Feb 13 15:53:15.692018 kubelet[1817]: I0213 15:53:15.690590    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff"
Feb 13 15:53:15.692185 containerd[1477]: time="2025-02-13T15:53:15.691670703Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:15.693922 containerd[1477]: time="2025-02-13T15:53:15.693858647Z" level=info msg="Ensure that sandbox 219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff in task-service has been cleanup successfully"
Feb 13 15:53:15.697294 systemd[1]: run-netns-cni\x2d41373173\x2dacca\x2d9565\x2d6cd5\x2de41575492b6f.mount: Deactivated successfully.
Feb 13 15:53:15.699603 containerd[1477]: time="2025-02-13T15:53:15.699036027Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:15.699603 containerd[1477]: time="2025-02-13T15:53:15.699098943Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:15.702085 containerd[1477]: time="2025-02-13T15:53:15.701533535Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:15.702085 containerd[1477]: time="2025-02-13T15:53:15.701725545Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:15.702085 containerd[1477]: time="2025-02-13T15:53:15.701746152Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:15.702665 containerd[1477]: time="2025-02-13T15:53:15.702334412Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:15.702665 containerd[1477]: time="2025-02-13T15:53:15.702472875Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:15.702665 containerd[1477]: time="2025-02-13T15:53:15.702499381Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:15.705211 containerd[1477]: time="2025-02-13T15:53:15.704613378Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:15.705211 containerd[1477]: time="2025-02-13T15:53:15.705054480Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:15.705211 containerd[1477]: time="2025-02-13T15:53:15.705078889Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:15.706969 containerd[1477]: time="2025-02-13T15:53:15.706616562Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:15.707104 containerd[1477]: time="2025-02-13T15:53:15.706895212Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:15.707104 containerd[1477]: time="2025-02-13T15:53:15.707028229Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:15.707919 containerd[1477]: time="2025-02-13T15:53:15.707818086Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:15.708022 containerd[1477]: time="2025-02-13T15:53:15.707976872Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:15.708022 containerd[1477]: time="2025-02-13T15:53:15.707994723Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:15.709737 containerd[1477]: time="2025-02-13T15:53:15.709513761Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:15.709737 containerd[1477]: time="2025-02-13T15:53:15.709639374Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:15.709737 containerd[1477]: time="2025-02-13T15:53:15.709656762Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:15.712093 containerd[1477]: time="2025-02-13T15:53:15.712003240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:7,}"
Feb 13 15:53:15.715759 kubelet[1817]: I0213 15:53:15.714724    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea"
Feb 13 15:53:15.717777 containerd[1477]: time="2025-02-13T15:53:15.717650226Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:15.718115 containerd[1477]: time="2025-02-13T15:53:15.718004367Z" level=info msg="Ensure that sandbox dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea in task-service has been cleanup successfully"
Feb 13 15:53:15.721691 containerd[1477]: time="2025-02-13T15:53:15.721480365Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:15.721691 containerd[1477]: time="2025-02-13T15:53:15.721540015Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:15.722714 systemd[1]: run-netns-cni\x2d05df9238\x2dad51\x2db6ec\x2d65c8\x2dee972e5023b6.mount: Deactivated successfully.
Feb 13 15:53:15.730459 containerd[1477]: time="2025-02-13T15:53:15.729688477Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:15.730459 containerd[1477]: time="2025-02-13T15:53:15.729863571Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:15.730459 containerd[1477]: time="2025-02-13T15:53:15.729882178Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:15.730788 containerd[1477]: time="2025-02-13T15:53:15.730701617Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:15.731360 containerd[1477]: time="2025-02-13T15:53:15.730882741Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:15.731360 containerd[1477]: time="2025-02-13T15:53:15.730910488Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:15.732699 containerd[1477]: time="2025-02-13T15:53:15.732645580Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:15.733197 containerd[1477]: time="2025-02-13T15:53:15.733039299Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:15.733197 containerd[1477]: time="2025-02-13T15:53:15.733067489Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:15.742777 containerd[1477]: time="2025-02-13T15:53:15.742683357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:4,}"
Feb 13 15:53:15.920031 containerd[1477]: time="2025-02-13T15:53:15.919895542Z" level=error msg="Failed to destroy network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.921805 containerd[1477]: time="2025-02-13T15:53:15.921572457Z" level=error msg="encountered an error cleaning up failed sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.921805 containerd[1477]: time="2025-02-13T15:53:15.921697290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.922622 kubelet[1817]: E0213 15:53:15.922065    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.923077 kubelet[1817]: E0213 15:53:15.922867    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:15.923077 kubelet[1817]: E0213 15:53:15.922964    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:15.924684 kubelet[1817]: E0213 15:53:15.924312    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:15.946263 containerd[1477]: time="2025-02-13T15:53:15.943903307Z" level=error msg="Failed to destroy network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.946263 containerd[1477]: time="2025-02-13T15:53:15.944410087Z" level=error msg="encountered an error cleaning up failed sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.946263 containerd[1477]: time="2025-02-13T15:53:15.944498730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.946607 kubelet[1817]: E0213 15:53:15.946207    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:15.946607 kubelet[1817]: E0213 15:53:15.946287    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:15.946607 kubelet[1817]: E0213 15:53:15.946323    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:15.946771 kubelet[1817]: E0213 15:53:15.946404    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:16.406483 kubelet[1817]: E0213 15:53:16.406421    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:16.615095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208-shm.mount: Deactivated successfully.
Feb 13 15:53:16.723350 kubelet[1817]: I0213 15:53:16.722428    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208"
Feb 13 15:53:16.723780 containerd[1477]: time="2025-02-13T15:53:16.723734300Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:16.725124 containerd[1477]: time="2025-02-13T15:53:16.725052540Z" level=info msg="Ensure that sandbox c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208 in task-service has been cleanup successfully"
Feb 13 15:53:16.730876 containerd[1477]: time="2025-02-13T15:53:16.726933016Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:16.730876 containerd[1477]: time="2025-02-13T15:53:16.726989107Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:16.734302 containerd[1477]: time="2025-02-13T15:53:16.732129496Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:16.734302 containerd[1477]: time="2025-02-13T15:53:16.732407998Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:16.734302 containerd[1477]: time="2025-02-13T15:53:16.732456204Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:16.734302 containerd[1477]: time="2025-02-13T15:53:16.733434016Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:16.734302 containerd[1477]: time="2025-02-13T15:53:16.733678605Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:16.733359 systemd[1]: run-netns-cni\x2d2f4f482d\x2d5c8c\x2dd9f7\x2d03cc\x2dcb132ba7f71e.mount: Deactivated successfully.
Feb 13 15:53:16.736994 containerd[1477]: time="2025-02-13T15:53:16.734364981Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:16.737754 containerd[1477]: time="2025-02-13T15:53:16.737354913Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:16.737754 containerd[1477]: time="2025-02-13T15:53:16.737524440Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:16.737754 containerd[1477]: time="2025-02-13T15:53:16.737543207Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:16.739785 containerd[1477]: time="2025-02-13T15:53:16.739114842Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:16.740360 containerd[1477]: time="2025-02-13T15:53:16.740052788Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:16.740618 containerd[1477]: time="2025-02-13T15:53:16.740584417Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:16.741454 kubelet[1817]: I0213 15:53:16.741411    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3"
Feb 13 15:53:16.741761 containerd[1477]: time="2025-02-13T15:53:16.741732060Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:16.741939 containerd[1477]: time="2025-02-13T15:53:16.741915999Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:16.742354 containerd[1477]: time="2025-02-13T15:53:16.742052043Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:16.742860 containerd[1477]: time="2025-02-13T15:53:16.742828159Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:16.743078 containerd[1477]: time="2025-02-13T15:53:16.743056493Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:16.743187 containerd[1477]: time="2025-02-13T15:53:16.743172799Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:16.743327 containerd[1477]: time="2025-02-13T15:53:16.743312213Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:16.743830 containerd[1477]: time="2025-02-13T15:53:16.743796839Z" level=info msg="Ensure that sandbox 2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3 in task-service has been cleanup successfully"
Feb 13 15:53:16.746486 containerd[1477]: time="2025-02-13T15:53:16.746434544Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:16.747321 systemd[1]: run-netns-cni\x2dbe6f95d5\x2d8a56\x2d77a0\x2dded6\x2dc74499f9e421.mount: Deactivated successfully.
Feb 13 15:53:16.749018 containerd[1477]: time="2025-02-13T15:53:16.747601542Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:16.749018 containerd[1477]: time="2025-02-13T15:53:16.748319504Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:16.750735 containerd[1477]: time="2025-02-13T15:53:16.750019319Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:16.750735 containerd[1477]: time="2025-02-13T15:53:16.750261560Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:16.750735 containerd[1477]: time="2025-02-13T15:53:16.750501078Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:16.750735 containerd[1477]: time="2025-02-13T15:53:16.750627171Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:16.750735 containerd[1477]: time="2025-02-13T15:53:16.750641057Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:16.754191 containerd[1477]: time="2025-02-13T15:53:16.753951882Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:16.754191 containerd[1477]: time="2025-02-13T15:53:16.754095433Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:16.754191 containerd[1477]: time="2025-02-13T15:53:16.754108370Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:16.754607 containerd[1477]: time="2025-02-13T15:53:16.754514420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:8,}"
Feb 13 15:53:16.754820 containerd[1477]: time="2025-02-13T15:53:16.754798259Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:16.755096 containerd[1477]: time="2025-02-13T15:53:16.755068104Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:16.755196 containerd[1477]: time="2025-02-13T15:53:16.755174803Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:16.755710 containerd[1477]: time="2025-02-13T15:53:16.755683906Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:16.756034 containerd[1477]: time="2025-02-13T15:53:16.756010647Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:16.757391 containerd[1477]: time="2025-02-13T15:53:16.757354579Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:16.760329 containerd[1477]: time="2025-02-13T15:53:16.760233711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:5,}"
Feb 13 15:53:16.979912 containerd[1477]: time="2025-02-13T15:53:16.979644037Z" level=error msg="Failed to destroy network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.982106 containerd[1477]: time="2025-02-13T15:53:16.981731306Z" level=error msg="encountered an error cleaning up failed sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.982106 containerd[1477]: time="2025-02-13T15:53:16.981859469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.982403 kubelet[1817]: E0213 15:53:16.982326    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.982486 kubelet[1817]: E0213 15:53:16.982432    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:16.982486 kubelet[1817]: E0213 15:53:16.982459    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:16.982915 kubelet[1817]: E0213 15:53:16.982561    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:16.992434 containerd[1477]: time="2025-02-13T15:53:16.992371224Z" level=error msg="Failed to destroy network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.993811 containerd[1477]: time="2025-02-13T15:53:16.993316977Z" level=error msg="encountered an error cleaning up failed sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.993811 containerd[1477]: time="2025-02-13T15:53:16.993434609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.994069 kubelet[1817]: E0213 15:53:16.993788    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:16.994069 kubelet[1817]: E0213 15:53:16.993864    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:16.994069 kubelet[1817]: E0213 15:53:16.993897    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:16.994336 kubelet[1817]: E0213 15:53:16.993976    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:17.411074 kubelet[1817]: E0213 15:53:17.410384    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:17.617814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d-shm.mount: Deactivated successfully.
Feb 13 15:53:17.754100 kubelet[1817]: I0213 15:53:17.754035    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d"
Feb 13 15:53:17.755431 containerd[1477]: time="2025-02-13T15:53:17.754932644Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:17.755431 containerd[1477]: time="2025-02-13T15:53:17.755275840Z" level=info msg="Ensure that sandbox 0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d in task-service has been cleanup successfully"
Feb 13 15:53:17.756488 containerd[1477]: time="2025-02-13T15:53:17.756447644Z" level=info msg="TearDown network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" successfully"
Feb 13 15:53:17.756940 containerd[1477]: time="2025-02-13T15:53:17.756623403Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" returns successfully"
Feb 13 15:53:17.759074 containerd[1477]: time="2025-02-13T15:53:17.759032120Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:17.760027 containerd[1477]: time="2025-02-13T15:53:17.759993198Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:17.760421 containerd[1477]: time="2025-02-13T15:53:17.760127300Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:17.761496 systemd[1]: run-netns-cni\x2d1da392be\x2d2ad7\x2d7e65\x2d4e44\x2df0dca4597b7c.mount: Deactivated successfully.
Feb 13 15:53:17.764437 containerd[1477]: time="2025-02-13T15:53:17.764013856Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:17.764437 containerd[1477]: time="2025-02-13T15:53:17.764232722Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:17.764437 containerd[1477]: time="2025-02-13T15:53:17.764253119Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:17.765508 containerd[1477]: time="2025-02-13T15:53:17.765470171Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:17.765771 containerd[1477]: time="2025-02-13T15:53:17.765746984Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:17.765879 containerd[1477]: time="2025-02-13T15:53:17.765859897Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:17.767331 containerd[1477]: time="2025-02-13T15:53:17.767193053Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:17.768172 containerd[1477]: time="2025-02-13T15:53:17.767967200Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:17.768172 containerd[1477]: time="2025-02-13T15:53:17.768104622Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:17.772607 containerd[1477]: time="2025-02-13T15:53:17.772543803Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:17.772808 containerd[1477]: time="2025-02-13T15:53:17.772710477Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:17.772808 containerd[1477]: time="2025-02-13T15:53:17.772779369Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:17.775340 containerd[1477]: time="2025-02-13T15:53:17.775016791Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:17.775340 containerd[1477]: time="2025-02-13T15:53:17.775129552Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:17.775624 containerd[1477]: time="2025-02-13T15:53:17.775140982Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:17.776740 containerd[1477]: time="2025-02-13T15:53:17.776676570Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:17.776740 containerd[1477]: time="2025-02-13T15:53:17.776813548Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:17.776740 containerd[1477]: time="2025-02-13T15:53:17.776826015Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:17.777264 kubelet[1817]: I0213 15:53:17.777121    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00"
Feb 13 15:53:17.778350 containerd[1477]: time="2025-02-13T15:53:17.778311331Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:17.778976 containerd[1477]: time="2025-02-13T15:53:17.778830731Z" level=info msg="Ensure that sandbox 80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00 in task-service has been cleanup successfully"
Feb 13 15:53:17.779346 containerd[1477]: time="2025-02-13T15:53:17.779270653Z" level=info msg="TearDown network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" successfully"
Feb 13 15:53:17.779346 containerd[1477]: time="2025-02-13T15:53:17.779300795Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" returns successfully"
Feb 13 15:53:17.779786 containerd[1477]: time="2025-02-13T15:53:17.779626576Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:17.783230 containerd[1477]: time="2025-02-13T15:53:17.783008826Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:17.783230 containerd[1477]: time="2025-02-13T15:53:17.783076826Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:17.784389 containerd[1477]: time="2025-02-13T15:53:17.784236033Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:17.784658 systemd[1]: run-netns-cni\x2d3a8cf564\x2dce62\x2df661\x2d248c\x2dcba6bde1085a.mount: Deactivated successfully.
Feb 13 15:53:17.786444 containerd[1477]: time="2025-02-13T15:53:17.784737417Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:17.786444 containerd[1477]: time="2025-02-13T15:53:17.786370807Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:17.787451 containerd[1477]: time="2025-02-13T15:53:17.787087568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:9,}"
Feb 13 15:53:17.789165 containerd[1477]: time="2025-02-13T15:53:17.788847847Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:17.789165 containerd[1477]: time="2025-02-13T15:53:17.789015945Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:17.789165 containerd[1477]: time="2025-02-13T15:53:17.789038844Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:17.790042 containerd[1477]: time="2025-02-13T15:53:17.790004614Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:17.790590 containerd[1477]: time="2025-02-13T15:53:17.790473985Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:17.790590 containerd[1477]: time="2025-02-13T15:53:17.790504631Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:17.792649 containerd[1477]: time="2025-02-13T15:53:17.792375537Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:17.792649 containerd[1477]: time="2025-02-13T15:53:17.792518973Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:17.792649 containerd[1477]: time="2025-02-13T15:53:17.792539105Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:17.793611 containerd[1477]: time="2025-02-13T15:53:17.793572040Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:17.793793 containerd[1477]: time="2025-02-13T15:53:17.793708734Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:17.793793 containerd[1477]: time="2025-02-13T15:53:17.793728616Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:17.796883 containerd[1477]: time="2025-02-13T15:53:17.796499274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:6,}"
Feb 13 15:53:18.202749 containerd[1477]: time="2025-02-13T15:53:18.202444173Z" level=error msg="Failed to destroy network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.205454 containerd[1477]: time="2025-02-13T15:53:18.204509438Z" level=error msg="encountered an error cleaning up failed sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.205454 containerd[1477]: time="2025-02-13T15:53:18.204626766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.205795 kubelet[1817]: E0213 15:53:18.205014    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.205795 kubelet[1817]: E0213 15:53:18.205236    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:18.205795 kubelet[1817]: E0213 15:53:18.205269    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:18.205962 kubelet[1817]: E0213 15:53:18.205353    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:18.277949 containerd[1477]: time="2025-02-13T15:53:18.277576924Z" level=error msg="Failed to destroy network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.279768 containerd[1477]: time="2025-02-13T15:53:18.278729189Z" level=error msg="encountered an error cleaning up failed sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.279768 containerd[1477]: time="2025-02-13T15:53:18.278843068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.280076 kubelet[1817]: E0213 15:53:18.279181    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:18.280076 kubelet[1817]: E0213 15:53:18.279245    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:18.280076 kubelet[1817]: E0213 15:53:18.279275    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:18.280250 kubelet[1817]: E0213 15:53:18.279360    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:18.412201 kubelet[1817]: E0213 15:53:18.410738    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:18.535137 systemd[1]: Started sshd@10-143.110.144.28:22-218.92.0.157:20651.service - OpenSSH per-connection server daemon (218.92.0.157:20651).
Feb 13 15:53:18.616010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469-shm.mount: Deactivated successfully.
Feb 13 15:53:18.616205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540501767.mount: Deactivated successfully.
Feb 13 15:53:18.652491 containerd[1477]: time="2025-02-13T15:53:18.652087806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:18.657953 containerd[1477]: time="2025-02-13T15:53:18.657700218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 15:53:18.664211 containerd[1477]: time="2025-02-13T15:53:18.663349420Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:18.667083 containerd[1477]: time="2025-02-13T15:53:18.666965833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:18.668720 containerd[1477]: time="2025-02-13T15:53:18.667892123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.075410664s"
Feb 13 15:53:18.668720 containerd[1477]: time="2025-02-13T15:53:18.667950194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 15:53:18.692019 containerd[1477]: time="2025-02-13T15:53:18.691954994Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:53:18.729483 containerd[1477]: time="2025-02-13T15:53:18.728929017Z" level=info msg="CreateContainer within sandbox \"a9ff8392c03c085cf9cb1cf7d2cde3568b539ea509aca09f97f70ac291294e75\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09\""
Feb 13 15:53:18.730653 containerd[1477]: time="2025-02-13T15:53:18.730558781Z" level=info msg="StartContainer for \"e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09\""
Feb 13 15:53:18.797416 kubelet[1817]: I0213 15:53:18.796789    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469"
Feb 13 15:53:18.803321 containerd[1477]: time="2025-02-13T15:53:18.803262153Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\""
Feb 13 15:53:18.805536 containerd[1477]: time="2025-02-13T15:53:18.803585102Z" level=info msg="Ensure that sandbox 6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469 in task-service has been cleanup successfully"
Feb 13 15:53:18.806028 containerd[1477]: time="2025-02-13T15:53:18.805630883Z" level=info msg="TearDown network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" successfully"
Feb 13 15:53:18.806028 containerd[1477]: time="2025-02-13T15:53:18.805703278Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" returns successfully"
Feb 13 15:53:18.806904 containerd[1477]: time="2025-02-13T15:53:18.806855538Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:18.807099 containerd[1477]: time="2025-02-13T15:53:18.806997218Z" level=info msg="TearDown network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" successfully"
Feb 13 15:53:18.807099 containerd[1477]: time="2025-02-13T15:53:18.807021461Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" returns successfully"
Feb 13 15:53:18.807720 kubelet[1817]: I0213 15:53:18.807669    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139"
Feb 13 15:53:18.809464 containerd[1477]: time="2025-02-13T15:53:18.809406742Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\""
Feb 13 15:53:18.810196 containerd[1477]: time="2025-02-13T15:53:18.810133686Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:18.810322 containerd[1477]: time="2025-02-13T15:53:18.810299107Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:18.810521 containerd[1477]: time="2025-02-13T15:53:18.810323800Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:18.810819 containerd[1477]: time="2025-02-13T15:53:18.810787318Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:18.810941 containerd[1477]: time="2025-02-13T15:53:18.810917208Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:18.811007 containerd[1477]: time="2025-02-13T15:53:18.810941047Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:18.811410 containerd[1477]: time="2025-02-13T15:53:18.811372340Z" level=info msg="Ensure that sandbox 9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139 in task-service has been cleanup successfully"
Feb 13 15:53:18.813567 containerd[1477]: time="2025-02-13T15:53:18.813527950Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:18.813707 containerd[1477]: time="2025-02-13T15:53:18.813674632Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:18.813707 containerd[1477]: time="2025-02-13T15:53:18.813692198Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:18.814081 containerd[1477]: time="2025-02-13T15:53:18.813831034Z" level=info msg="TearDown network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" successfully"
Feb 13 15:53:18.814081 containerd[1477]: time="2025-02-13T15:53:18.813849518Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" returns successfully"
Feb 13 15:53:18.814322 containerd[1477]: time="2025-02-13T15:53:18.814303275Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:18.814446 containerd[1477]: time="2025-02-13T15:53:18.814410502Z" level=info msg="TearDown network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" successfully"
Feb 13 15:53:18.814446 containerd[1477]: time="2025-02-13T15:53:18.814433491Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" returns successfully"
Feb 13 15:53:18.814539 containerd[1477]: time="2025-02-13T15:53:18.814511787Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:18.814611 containerd[1477]: time="2025-02-13T15:53:18.814590993Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:18.814685 containerd[1477]: time="2025-02-13T15:53:18.814611274Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:18.815246 containerd[1477]: time="2025-02-13T15:53:18.815019158Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:18.815246 containerd[1477]: time="2025-02-13T15:53:18.815144896Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:18.815395 containerd[1477]: time="2025-02-13T15:53:18.815224451Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:18.815497 containerd[1477]: time="2025-02-13T15:53:18.815473800Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:18.815562 containerd[1477]: time="2025-02-13T15:53:18.815498407Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:18.815675 containerd[1477]: time="2025-02-13T15:53:18.815649757Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:18.816141 containerd[1477]: time="2025-02-13T15:53:18.816104854Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:18.816270 containerd[1477]: time="2025-02-13T15:53:18.816246020Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:18.816334 containerd[1477]: time="2025-02-13T15:53:18.816271081Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:18.816334 containerd[1477]: time="2025-02-13T15:53:18.816123007Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:18.816402 containerd[1477]: time="2025-02-13T15:53:18.816388236Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:18.816426 containerd[1477]: time="2025-02-13T15:53:18.816403322Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:18.817272 containerd[1477]: time="2025-02-13T15:53:18.816749370Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:18.817272 containerd[1477]: time="2025-02-13T15:53:18.816872795Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:18.817272 containerd[1477]: time="2025-02-13T15:53:18.816890153Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:18.820526 containerd[1477]: time="2025-02-13T15:53:18.820461347Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:18.820680 containerd[1477]: time="2025-02-13T15:53:18.820640461Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:18.820680 containerd[1477]: time="2025-02-13T15:53:18.820660175Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:18.820780 containerd[1477]: time="2025-02-13T15:53:18.820767270Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:18.820880 containerd[1477]: time="2025-02-13T15:53:18.820855898Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:18.820926 containerd[1477]: time="2025-02-13T15:53:18.820879113Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:18.822527 containerd[1477]: time="2025-02-13T15:53:18.822051513Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:18.822527 containerd[1477]: time="2025-02-13T15:53:18.822135025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:10,}"
Feb 13 15:53:18.822527 containerd[1477]: time="2025-02-13T15:53:18.822221426Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:18.822527 containerd[1477]: time="2025-02-13T15:53:18.822241672Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:18.823983 containerd[1477]: time="2025-02-13T15:53:18.823935377Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:18.824112 containerd[1477]: time="2025-02-13T15:53:18.824074946Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:18.824112 containerd[1477]: time="2025-02-13T15:53:18.824092608Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:18.827280 containerd[1477]: time="2025-02-13T15:53:18.826866593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:7,}"
Feb 13 15:53:18.890653 systemd[1]: Started cri-containerd-e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09.scope - libcontainer container e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09.
Feb 13 15:53:18.982215 containerd[1477]: time="2025-02-13T15:53:18.980711607Z" level=info msg="StartContainer for \"e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09\" returns successfully"
Feb 13 15:53:19.044603 containerd[1477]: time="2025-02-13T15:53:19.044509980Z" level=error msg="Failed to destroy network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.045963 containerd[1477]: time="2025-02-13T15:53:19.045462386Z" level=error msg="encountered an error cleaning up failed sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.045963 containerd[1477]: time="2025-02-13T15:53:19.045616257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.046584 kubelet[1817]: E0213 15:53:19.046375    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.046584 kubelet[1817]: E0213 15:53:19.046462    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:19.046584 kubelet[1817]: E0213 15:53:19.046500    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88c9f"
Feb 13 15:53:19.046813 kubelet[1817]: E0213 15:53:19.046594    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-88c9f_calico-system(0f99465c-7630-4cd4-9490-a0c5effdd0c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88c9f" podUID="0f99465c-7630-4cd4-9490-a0c5effdd0c7"
Feb 13 15:53:19.090736 containerd[1477]: time="2025-02-13T15:53:19.090550455Z" level=error msg="Failed to destroy network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.091103 containerd[1477]: time="2025-02-13T15:53:19.091046562Z" level=error msg="encountered an error cleaning up failed sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.091215 containerd[1477]: time="2025-02-13T15:53:19.091170651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.091531 kubelet[1817]: E0213 15:53:19.091487    1817 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:53:19.091617 kubelet[1817]: E0213 15:53:19.091574    1817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:19.091617 kubelet[1817]: E0213 15:53:19.091605    1817 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-5kpsz"
Feb 13 15:53:19.091722 kubelet[1817]: E0213 15:53:19.091696    1817 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-5kpsz_default(1fda4028-67fe-488c-814b-ef072254e845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-5kpsz" podUID="1fda4028-67fe-488c-814b-ef072254e845"
Feb 13 15:53:19.127669 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:53:19.127887 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 15:53:19.317902 update_engine[1454]: I20250213 15:53:19.317790  1454 update_attempter.cc:509] Updating boot flags...
Feb 13 15:53:19.361826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2973)
Feb 13 15:53:19.411968 kubelet[1817]: E0213 15:53:19.411905    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:19.616634 systemd[1]: run-netns-cni\x2d85a74d19\x2dcc0b\x2d641d\x2d9703\x2d32b1ea225589.mount: Deactivated successfully.
Feb 13 15:53:19.616777 systemd[1]: run-netns-cni\x2db298d53c\x2d3dc4\x2dad36\x2d3022\x2da8122e5b0bf9.mount: Deactivated successfully.
Feb 13 15:53:19.697047 sshd-session[2986]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157  user=root
Feb 13 15:53:19.822184 kubelet[1817]: E0213 15:53:19.820775    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:19.836630 kubelet[1817]: I0213 15:53:19.836590    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d"
Feb 13 15:53:19.837901 containerd[1477]: time="2025-02-13T15:53:19.837859032Z" level=info msg="StopPodSandbox for \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\""
Feb 13 15:53:19.838367 containerd[1477]: time="2025-02-13T15:53:19.838069376Z" level=info msg="Ensure that sandbox c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d in task-service has been cleanup successfully"
Feb 13 15:53:19.844620 containerd[1477]: time="2025-02-13T15:53:19.842672880Z" level=info msg="TearDown network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" successfully"
Feb 13 15:53:19.844620 containerd[1477]: time="2025-02-13T15:53:19.842792359Z" level=info msg="StopPodSandbox for \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" returns successfully"
Feb 13 15:53:19.844009 systemd[1]: run-netns-cni\x2d2b868b41\x2db266\x2ddc9d\x2d2fa0\x2dc67dc714574c.mount: Deactivated successfully.
Feb 13 15:53:19.846363 containerd[1477]: time="2025-02-13T15:53:19.846305795Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\""
Feb 13 15:53:19.846575 containerd[1477]: time="2025-02-13T15:53:19.846444759Z" level=info msg="TearDown network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" successfully"
Feb 13 15:53:19.846575 containerd[1477]: time="2025-02-13T15:53:19.846458087Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" returns successfully"
Feb 13 15:53:19.848437 containerd[1477]: time="2025-02-13T15:53:19.848242702Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:19.848554 containerd[1477]: time="2025-02-13T15:53:19.848435471Z" level=info msg="TearDown network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" successfully"
Feb 13 15:53:19.848554 containerd[1477]: time="2025-02-13T15:53:19.848457013Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" returns successfully"
Feb 13 15:53:19.849262 containerd[1477]: time="2025-02-13T15:53:19.849220460Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:19.849672 containerd[1477]: time="2025-02-13T15:53:19.849346935Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:19.849672 containerd[1477]: time="2025-02-13T15:53:19.849360116Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:19.849958 containerd[1477]: time="2025-02-13T15:53:19.849920095Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:19.850071 containerd[1477]: time="2025-02-13T15:53:19.850035823Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:19.850071 containerd[1477]: time="2025-02-13T15:53:19.850052776Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:19.851406 containerd[1477]: time="2025-02-13T15:53:19.851373873Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:19.851515 containerd[1477]: time="2025-02-13T15:53:19.851471376Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:19.851515 containerd[1477]: time="2025-02-13T15:53:19.851482804Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:19.852449 containerd[1477]: time="2025-02-13T15:53:19.852423353Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:19.852544 containerd[1477]: time="2025-02-13T15:53:19.852522402Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:19.852544 containerd[1477]: time="2025-02-13T15:53:19.852538689Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:19.852838 containerd[1477]: time="2025-02-13T15:53:19.852815098Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:19.852931 containerd[1477]: time="2025-02-13T15:53:19.852912377Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:19.852972 containerd[1477]: time="2025-02-13T15:53:19.852931288Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:19.853918 kubelet[1817]: I0213 15:53:19.853428    1817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245"
Feb 13 15:53:19.854498 containerd[1477]: time="2025-02-13T15:53:19.854466144Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:19.854716 containerd[1477]: time="2025-02-13T15:53:19.854701147Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:19.854794 containerd[1477]: time="2025-02-13T15:53:19.854783034Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:19.854956 containerd[1477]: time="2025-02-13T15:53:19.854942546Z" level=info msg="StopPodSandbox for \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\""
Feb 13 15:53:19.855440 containerd[1477]: time="2025-02-13T15:53:19.855415432Z" level=info msg="Ensure that sandbox 740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245 in task-service has been cleanup successfully"
Feb 13 15:53:19.855560 containerd[1477]: time="2025-02-13T15:53:19.855535483Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:19.855731 containerd[1477]: time="2025-02-13T15:53:19.855645411Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:19.855731 containerd[1477]: time="2025-02-13T15:53:19.855659295Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:19.856738 containerd[1477]: time="2025-02-13T15:53:19.856247616Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:19.856738 containerd[1477]: time="2025-02-13T15:53:19.856281963Z" level=info msg="TearDown network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" successfully"
Feb 13 15:53:19.856738 containerd[1477]: time="2025-02-13T15:53:19.856301078Z" level=info msg="StopPodSandbox for \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" returns successfully"
Feb 13 15:53:19.856738 containerd[1477]: time="2025-02-13T15:53:19.856349957Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:19.856738 containerd[1477]: time="2025-02-13T15:53:19.856366416Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:19.861204 containerd[1477]: time="2025-02-13T15:53:19.860956351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:11,}"
Feb 13 15:53:19.861877 systemd[1]: run-netns-cni\x2d2f759d56\x2dd9e1\x2d7bdf\x2d90dc\x2defdf3cbd3727.mount: Deactivated successfully.
Feb 13 15:53:19.862478 containerd[1477]: time="2025-02-13T15:53:19.862330512Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\""
Feb 13 15:53:19.862478 containerd[1477]: time="2025-02-13T15:53:19.862464730Z" level=info msg="TearDown network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" successfully"
Feb 13 15:53:19.862478 containerd[1477]: time="2025-02-13T15:53:19.862477230Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" returns successfully"
Feb 13 15:53:19.863626 containerd[1477]: time="2025-02-13T15:53:19.863443747Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:19.863626 containerd[1477]: time="2025-02-13T15:53:19.863545058Z" level=info msg="TearDown network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" successfully"
Feb 13 15:53:19.863626 containerd[1477]: time="2025-02-13T15:53:19.863559159Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" returns successfully"
Feb 13 15:53:19.864840 containerd[1477]: time="2025-02-13T15:53:19.864768593Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:19.864965 containerd[1477]: time="2025-02-13T15:53:19.864917527Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:19.864965 containerd[1477]: time="2025-02-13T15:53:19.864935038Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:19.865677 containerd[1477]: time="2025-02-13T15:53:19.865633181Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:19.865790 containerd[1477]: time="2025-02-13T15:53:19.865764535Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:19.865827 containerd[1477]: time="2025-02-13T15:53:19.865789706Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:19.866897 containerd[1477]: time="2025-02-13T15:53:19.866833339Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:19.868650 containerd[1477]: time="2025-02-13T15:53:19.868473950Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:19.868650 containerd[1477]: time="2025-02-13T15:53:19.868520313Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:19.869510 containerd[1477]: time="2025-02-13T15:53:19.869474991Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:19.870326 containerd[1477]: time="2025-02-13T15:53:19.870299172Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:19.870755 containerd[1477]: time="2025-02-13T15:53:19.870733675Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:19.872113 containerd[1477]: time="2025-02-13T15:53:19.872075412Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:19.873266 containerd[1477]: time="2025-02-13T15:53:19.872488271Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:19.873536 containerd[1477]: time="2025-02-13T15:53:19.873456308Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:19.876598 containerd[1477]: time="2025-02-13T15:53:19.876143459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:8,}"
Feb 13 15:53:19.882446 systemd[1]: run-containerd-runc-k8s.io-e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09-runc.TWnz0B.mount: Deactivated successfully.
Feb 13 15:53:19.902866 kubelet[1817]: I0213 15:53:19.902811    1817 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-d26vf" podStartSLOduration=5.467254243 podStartE2EDuration="29.900325329s" podCreationTimestamp="2025-02-13 15:52:50 +0000 UTC" firstStartedPulling="2025-02-13 15:52:54.235244569 +0000 UTC m=+4.462886619" lastFinishedPulling="2025-02-13 15:53:18.668315658 +0000 UTC m=+28.895957705" observedRunningTime="2025-02-13 15:53:19.887612352 +0000 UTC m=+30.115254410" watchObservedRunningTime="2025-02-13 15:53:19.900325329 +0000 UTC m=+30.127967379"
Feb 13 15:53:20.413603 kubelet[1817]: E0213 15:53:20.413508    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:20.419862 systemd-networkd[1374]: cali36f1a4fd186: Link UP
Feb 13 15:53:20.420487 systemd-networkd[1374]: cali36f1a4fd186: Gained carrier
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:19.975 [INFO][3003] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.084 [INFO][3003] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.144.28-k8s-csi--node--driver--88c9f-eth0 csi-node-driver- calico-system  0f99465c-7630-4cd4-9490-a0c5effdd0c7 1008 0 2025-02-13 15:52:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  143.110.144.28  csi-node-driver-88c9f eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali36f1a4fd186  [] []}} ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.084 [INFO][3003] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.197 [INFO][3031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" HandleID="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Workload="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.244 [INFO][3031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" HandleID="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Workload="143.110.144.28-k8s-csi--node--driver--88c9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319640), Attrs:map[string]string{"namespace":"calico-system", "node":"143.110.144.28", "pod":"csi-node-driver-88c9f", "timestamp":"2025-02-13 15:53:20.197034987 +0000 UTC"}, Hostname:"143.110.144.28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.244 [INFO][3031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.245 [INFO][3031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.245 [INFO][3031] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.144.28'
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.253 [INFO][3031] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.283 [INFO][3031] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.310 [INFO][3031] ipam/ipam.go 489: Trying affinity for 192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.329 [INFO][3031] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.351 [INFO][3031] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.351 [INFO][3031] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.359 [INFO][3031] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.376 [INFO][3031] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.400 [INFO][3031] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.1/26] block=192.168.91.0/26 handle="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.400 [INFO][3031] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.1/26] handle="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" host="143.110.144.28"
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.400 [INFO][3031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:53:20.476784 containerd[1477]: 2025-02-13 15:53:20.400 [INFO][3031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.1/26] IPv6=[] ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" HandleID="k8s-pod-network.44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Workload="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.405 [INFO][3003] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-csi--node--driver--88c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f99465c-7630-4cd4-9490-a0c5effdd0c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"", Pod:"csi-node-driver-88c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36f1a4fd186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.405 [INFO][3003] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.1/32] ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.405 [INFO][3003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36f1a4fd186 ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.420 [INFO][3003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.421 [INFO][3003] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-csi--node--driver--88c9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f99465c-7630-4cd4-9490-a0c5effdd0c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615", Pod:"csi-node-driver-88c9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36f1a4fd186", MAC:"9a:89:22:5b:14:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:20.479179 containerd[1477]: 2025-02-13 15:53:20.474 [INFO][3003] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615" Namespace="calico-system" Pod="csi-node-driver-88c9f" WorkloadEndpoint="143.110.144.28-k8s-csi--node--driver--88c9f-eth0"
Feb 13 15:53:20.516093 containerd[1477]: time="2025-02-13T15:53:20.515842632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:20.516093 containerd[1477]: time="2025-02-13T15:53:20.515963496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:20.516093 containerd[1477]: time="2025-02-13T15:53:20.516047128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:20.516903 containerd[1477]: time="2025-02-13T15:53:20.516725098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:20.547555 systemd[1]: Started cri-containerd-44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615.scope - libcontainer container 44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615.
Feb 13 15:53:20.609797 systemd-networkd[1374]: calia9d7f0e44e7: Link UP
Feb 13 15:53:20.611846 systemd-networkd[1374]: calia9d7f0e44e7: Gained carrier
Feb 13 15:53:20.637705 containerd[1477]: time="2025-02-13T15:53:20.637524484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88c9f,Uid:0f99465c-7630-4cd4-9490-a0c5effdd0c7,Namespace:calico-system,Attempt:11,} returns sandbox id \"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615\""
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.018 [INFO][3022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.112 [INFO][3022] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0 nginx-deployment-6d5f899847- default  1fda4028-67fe-488c-814b-ef072254e845 1128 0 2025-02-13 15:53:11 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  143.110.144.28  nginx-deployment-6d5f899847-5kpsz eth0 default [] []   [kns.default ksa.default.default] calia9d7f0e44e7  [] []}} ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.112 [INFO][3022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.234 [INFO][3036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" HandleID="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Workload="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.281 [INFO][3036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" HandleID="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Workload="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fec50), Attrs:map[string]string{"namespace":"default", "node":"143.110.144.28", "pod":"nginx-deployment-6d5f899847-5kpsz", "timestamp":"2025-02-13 15:53:20.234519318 +0000 UTC"}, Hostname:"143.110.144.28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.281 [INFO][3036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.401 [INFO][3036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.403 [INFO][3036] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.144.28'
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.412 [INFO][3036] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.443 [INFO][3036] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.486 [INFO][3036] ipam/ipam.go 489: Trying affinity for 192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.512 [INFO][3036] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.547 [INFO][3036] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.547 [INFO][3036] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.556 [INFO][3036] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.576 [INFO][3036] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.596 [INFO][3036] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.2/26] block=192.168.91.0/26 handle="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.596 [INFO][3036] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.2/26] handle="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" host="143.110.144.28"
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.596 [INFO][3036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:53:20.643908 containerd[1477]: 2025-02-13 15:53:20.596 [INFO][3036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.2/26] IPv6=[] ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" HandleID="k8s-pod-network.eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Workload="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.602 [INFO][3022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"1fda4028-67fe-488c-814b-ef072254e845", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"", Pod:"nginx-deployment-6d5f899847-5kpsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9d7f0e44e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.602 [INFO][3022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.2/32] ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.602 [INFO][3022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9d7f0e44e7 ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.612 [INFO][3022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.618 [INFO][3022] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"1fda4028-67fe-488c-814b-ef072254e845", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5", Pod:"nginx-deployment-6d5f899847-5kpsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9d7f0e44e7", MAC:"76:d7:79:a9:60:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:20.644761 containerd[1477]: 2025-02-13 15:53:20.634 [INFO][3022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5" Namespace="default" Pod="nginx-deployment-6d5f899847-5kpsz" WorkloadEndpoint="143.110.144.28-k8s-nginx--deployment--6d5f899847--5kpsz-eth0"
Feb 13 15:53:20.647688 containerd[1477]: time="2025-02-13T15:53:20.647287369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 15:53:20.680203 containerd[1477]: time="2025-02-13T15:53:20.679532131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:20.680203 containerd[1477]: time="2025-02-13T15:53:20.679619242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:20.680203 containerd[1477]: time="2025-02-13T15:53:20.679644721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:20.680203 containerd[1477]: time="2025-02-13T15:53:20.679762473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:20.724560 systemd[1]: Started cri-containerd-eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5.scope - libcontainer container eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5.
Feb 13 15:53:20.786836 containerd[1477]: time="2025-02-13T15:53:20.786773679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5kpsz,Uid:1fda4028-67fe-488c-814b-ef072254e845,Namespace:default,Attempt:8,} returns sandbox id \"eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5\""
Feb 13 15:53:20.867618 kubelet[1817]: E0213 15:53:20.867245    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:20.890407 systemd[1]: run-containerd-runc-k8s.io-e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09-runc.sv22XA.mount: Deactivated successfully.
Feb 13 15:53:21.414430 kubelet[1817]: E0213 15:53:21.414361    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:21.518874 sshd[2855]: PAM: Permission denied for root from 218.92.0.157
Feb 13 15:53:21.685225 kernel: bpftool[3281]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 15:53:21.711530 systemd-networkd[1374]: cali36f1a4fd186: Gained IPv6LL
Feb 13 15:53:21.776262 systemd-networkd[1374]: calia9d7f0e44e7: Gained IPv6LL
Feb 13 15:53:21.803845 sshd-session[3274]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157  user=root
Feb 13 15:53:21.871259 kubelet[1817]: E0213 15:53:21.871023    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:21.911489 systemd[1]: run-containerd-runc-k8s.io-e62dd8ff2310d85dfec294ecd0653d4258d491e4082bfaf28d7e11a3da1c3d09-runc.1KaVgJ.mount: Deactivated successfully.
Feb 13 15:53:22.236988 systemd-networkd[1374]: vxlan.calico: Link UP
Feb 13 15:53:22.236999 systemd-networkd[1374]: vxlan.calico: Gained carrier
Feb 13 15:53:22.415793 kubelet[1817]: E0213 15:53:22.415494    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:22.594123 containerd[1477]: time="2025-02-13T15:53:22.592893034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:22.596945 containerd[1477]: time="2025-02-13T15:53:22.595260450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 15:53:22.599936 containerd[1477]: time="2025-02-13T15:53:22.599782285Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:22.607866 containerd[1477]: time="2025-02-13T15:53:22.607774185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:22.610413 containerd[1477]: time="2025-02-13T15:53:22.610346897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.963008902s"
Feb 13 15:53:22.610413 containerd[1477]: time="2025-02-13T15:53:22.610407801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 15:53:22.611690 containerd[1477]: time="2025-02-13T15:53:22.611494182Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:53:22.615242 containerd[1477]: time="2025-02-13T15:53:22.614556070Z" level=info msg="CreateContainer within sandbox \"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 15:53:22.673970 containerd[1477]: time="2025-02-13T15:53:22.673197034Z" level=info msg="CreateContainer within sandbox \"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5e30d1fea69e61fa7664be6c6bf8e037269f177a65e4ca2c991fd026033616da\""
Feb 13 15:53:22.675457 containerd[1477]: time="2025-02-13T15:53:22.675400976Z" level=info msg="StartContainer for \"5e30d1fea69e61fa7664be6c6bf8e037269f177a65e4ca2c991fd026033616da\""
Feb 13 15:53:22.772747 systemd[1]: Started cri-containerd-5e30d1fea69e61fa7664be6c6bf8e037269f177a65e4ca2c991fd026033616da.scope - libcontainer container 5e30d1fea69e61fa7664be6c6bf8e037269f177a65e4ca2c991fd026033616da.
Feb 13 15:53:22.836554 containerd[1477]: time="2025-02-13T15:53:22.836493574Z" level=info msg="StartContainer for \"5e30d1fea69e61fa7664be6c6bf8e037269f177a65e4ca2c991fd026033616da\" returns successfully"
Feb 13 15:53:23.416328 kubelet[1817]: E0213 15:53:23.416230    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:23.823460 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL
Feb 13 15:53:24.238329 sshd[2855]: PAM: Permission denied for root from 218.92.0.157
Feb 13 15:53:24.419481 kubelet[1817]: E0213 15:53:24.419305    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:24.531641 sshd-session[3428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157  user=root
Feb 13 15:53:25.420223 kubelet[1817]: E0213 15:53:25.420115    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:26.093053 kubelet[1817]: E0213 15:53:26.093003    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:53:26.142531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489271767.mount: Deactivated successfully.
Feb 13 15:53:26.421836 kubelet[1817]: E0213 15:53:26.421353    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:26.708619 sshd[2855]: PAM: Permission denied for root from 218.92.0.157
Feb 13 15:53:26.850764 sshd[2855]: Received disconnect from 218.92.0.157 port 20651:11:  [preauth]
Feb 13 15:53:26.850764 sshd[2855]: Disconnected from authenticating user root 218.92.0.157 port 20651 [preauth]
Feb 13 15:53:26.853419 systemd[1]: sshd@10-143.110.144.28:22-218.92.0.157:20651.service: Deactivated successfully.
Feb 13 15:53:27.421840 kubelet[1817]: E0213 15:53:27.421677    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:27.929260 containerd[1477]: time="2025-02-13T15:53:27.929108806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:27.932559 containerd[1477]: time="2025-02-13T15:53:27.932447068Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 15:53:27.935077 containerd[1477]: time="2025-02-13T15:53:27.934988978Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:27.939187 containerd[1477]: time="2025-02-13T15:53:27.939061688Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.327516122s"
Feb 13 15:53:27.939187 containerd[1477]: time="2025-02-13T15:53:27.939132217Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:53:27.939419 containerd[1477]: time="2025-02-13T15:53:27.939278432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:27.940398 containerd[1477]: time="2025-02-13T15:53:27.940034379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:53:27.943416 containerd[1477]: time="2025-02-13T15:53:27.943355772Z" level=info msg="CreateContainer within sandbox \"eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:53:27.967824 containerd[1477]: time="2025-02-13T15:53:27.967629919Z" level=info msg="CreateContainer within sandbox \"eb823307c7cd522fff6d9fc727fa395d29a6142a6feda94f692c93a0768105e5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5de240028991cedb25dc2f446c94dd81c52f3802aaf4cb791772a2f4357810c9\""
Feb 13 15:53:27.968836 containerd[1477]: time="2025-02-13T15:53:27.968715193Z" level=info msg="StartContainer for \"5de240028991cedb25dc2f446c94dd81c52f3802aaf4cb791772a2f4357810c9\""
Feb 13 15:53:28.017057 systemd[1]: Started cri-containerd-5de240028991cedb25dc2f446c94dd81c52f3802aaf4cb791772a2f4357810c9.scope - libcontainer container 5de240028991cedb25dc2f446c94dd81c52f3802aaf4cb791772a2f4357810c9.
Feb 13 15:53:28.066499 containerd[1477]: time="2025-02-13T15:53:28.066430493Z" level=info msg="StartContainer for \"5de240028991cedb25dc2f446c94dd81c52f3802aaf4cb791772a2f4357810c9\" returns successfully"
Feb 13 15:53:28.422236 kubelet[1817]: E0213 15:53:28.422119    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:28.938027 kubelet[1817]: I0213 15:53:28.937501    1817 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-5kpsz" podStartSLOduration=10.787577466 podStartE2EDuration="17.937442425s" podCreationTimestamp="2025-02-13 15:53:11 +0000 UTC" firstStartedPulling="2025-02-13 15:53:20.789968694 +0000 UTC m=+31.017610738" lastFinishedPulling="2025-02-13 15:53:27.939833644 +0000 UTC m=+38.167475697" observedRunningTime="2025-02-13 15:53:28.936740375 +0000 UTC m=+39.164382433" watchObservedRunningTime="2025-02-13 15:53:28.937442425 +0000 UTC m=+39.165084487"
Feb 13 15:53:29.422631 kubelet[1817]: E0213 15:53:29.422549    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:29.571523 containerd[1477]: time="2025-02-13T15:53:29.571415021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:29.573198 containerd[1477]: time="2025-02-13T15:53:29.572896881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 15:53:29.575137 containerd[1477]: time="2025-02-13T15:53:29.575033968Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:29.578818 containerd[1477]: time="2025-02-13T15:53:29.578731306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:29.579886 containerd[1477]: time="2025-02-13T15:53:29.579660632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.639581482s"
Feb 13 15:53:29.579886 containerd[1477]: time="2025-02-13T15:53:29.579727635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 15:53:29.582362 containerd[1477]: time="2025-02-13T15:53:29.582304824Z" level=info msg="CreateContainer within sandbox \"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:53:29.604397 containerd[1477]: time="2025-02-13T15:53:29.604077407Z" level=info msg="CreateContainer within sandbox \"44480ec4f107f1ebf1ebbb337f83fb31b6da6d3b0e1e76d911801317c8cec615\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05\""
Feb 13 15:53:29.606374 containerd[1477]: time="2025-02-13T15:53:29.605027538Z" level=info msg="StartContainer for \"5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05\""
Feb 13 15:53:29.650410 systemd[1]: run-containerd-runc-k8s.io-5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05-runc.yOKNmo.mount: Deactivated successfully.
Feb 13 15:53:29.659540 systemd[1]: Started cri-containerd-5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05.scope - libcontainer container 5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05.
Feb 13 15:53:29.708696 containerd[1477]: time="2025-02-13T15:53:29.708639323Z" level=info msg="StartContainer for \"5d20dbf9a518c0b7c2d5ae1f616e54fabde6f511dd96fdda423741420499fd05\" returns successfully"
Feb 13 15:53:30.356671 kubelet[1817]: E0213 15:53:30.356590    1817 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:30.423559 kubelet[1817]: E0213 15:53:30.423468    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:30.561526 kubelet[1817]: I0213 15:53:30.561405    1817 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:53:30.563168 kubelet[1817]: I0213 15:53:30.562847    1817 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:53:31.424763 kubelet[1817]: E0213 15:53:31.424659    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:32.426084 kubelet[1817]: E0213 15:53:32.425994    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:33.426674 kubelet[1817]: E0213 15:53:33.426584    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:34.427497 kubelet[1817]: E0213 15:53:34.427403    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:35.428129 kubelet[1817]: E0213 15:53:35.428050    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:36.428720 kubelet[1817]: E0213 15:53:36.428640    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:37.429373 kubelet[1817]: E0213 15:53:37.429300    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:37.574492 kubelet[1817]: I0213 15:53:37.574389    1817 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-88c9f" podStartSLOduration=38.64115653 podStartE2EDuration="47.574320233s" podCreationTimestamp="2025-02-13 15:52:50 +0000 UTC" firstStartedPulling="2025-02-13 15:53:20.64689659 +0000 UTC m=+30.874538620" lastFinishedPulling="2025-02-13 15:53:29.580060277 +0000 UTC m=+39.807702323" observedRunningTime="2025-02-13 15:53:29.963555802 +0000 UTC m=+40.191197875" watchObservedRunningTime="2025-02-13 15:53:37.574320233 +0000 UTC m=+47.801962286"
Feb 13 15:53:37.574903 kubelet[1817]: I0213 15:53:37.574865    1817 topology_manager.go:215] "Topology Admit Handler" podUID="ef4e16ff-760e-47ae-81a7-55b7553e6ba4" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 15:53:37.585294 systemd[1]: Created slice kubepods-besteffort-podef4e16ff_760e_47ae_81a7_55b7553e6ba4.slice - libcontainer container kubepods-besteffort-podef4e16ff_760e_47ae_81a7_55b7553e6ba4.slice.
Feb 13 15:53:37.622700 kubelet[1817]: I0213 15:53:37.622630    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzkb\" (UniqueName: \"kubernetes.io/projected/ef4e16ff-760e-47ae-81a7-55b7553e6ba4-kube-api-access-2gzkb\") pod \"nfs-server-provisioner-0\" (UID: \"ef4e16ff-760e-47ae-81a7-55b7553e6ba4\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:53:37.622700 kubelet[1817]: I0213 15:53:37.622724    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ef4e16ff-760e-47ae-81a7-55b7553e6ba4-data\") pod \"nfs-server-provisioner-0\" (UID: \"ef4e16ff-760e-47ae-81a7-55b7553e6ba4\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:53:37.890518 containerd[1477]: time="2025-02-13T15:53:37.890448305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ef4e16ff-760e-47ae-81a7-55b7553e6ba4,Namespace:default,Attempt:0,}"
Feb 13 15:53:38.301663 systemd-networkd[1374]: cali60e51b789ff: Link UP
Feb 13 15:53:38.305610 systemd-networkd[1374]: cali60e51b789ff: Gained carrier
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:37.996 [INFO][3607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.144.28-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  ef4e16ff-760e-47ae-81a7-55b7553e6ba4 1336 0 2025-02-13 15:53:37 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  143.110.144.28  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:37.996 [INFO][3607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.071 [INFO][3617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" HandleID="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Workload="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.102 [INFO][3617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" HandleID="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Workload="143.110.144.28-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c580), Attrs:map[string]string{"namespace":"default", "node":"143.110.144.28", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:53:38.071113415 +0000 UTC"}, Hostname:"143.110.144.28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.102 [INFO][3617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.102 [INFO][3617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.103 [INFO][3617] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.144.28'
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.108 [INFO][3617] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.124 [INFO][3617] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.172 [INFO][3617] ipam/ipam.go 489: Trying affinity for 192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.181 [INFO][3617] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.187 [INFO][3617] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.187 [INFO][3617] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.194 [INFO][3617] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.240 [INFO][3617] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.278 [INFO][3617] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.3/26] block=192.168.91.0/26 handle="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.278 [INFO][3617] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.3/26] handle="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" host="143.110.144.28"
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.278 [INFO][3617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:53:38.393794 containerd[1477]: 2025-02-13 15:53:38.278 [INFO][3617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.3/26] IPv6=[] ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" HandleID="k8s-pod-network.01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Workload="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.395343 containerd[1477]: 2025-02-13 15:53:38.282 [INFO][3607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ef4e16ff-760e-47ae-81a7-55b7553e6ba4", ResourceVersion:"1336", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:38.395343 containerd[1477]: 2025-02-13 15:53:38.292 [INFO][3607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.3/32] ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.395343 containerd[1477]: 2025-02-13 15:53:38.292 [INFO][3607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.395343 containerd[1477]: 2025-02-13 15:53:38.304 [INFO][3607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.397295 containerd[1477]: 2025-02-13 15:53:38.305 [INFO][3607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ef4e16ff-760e-47ae-81a7-55b7553e6ba4", ResourceVersion:"1336", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"46:ff:c1:8c:fa:94", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:38.397295 containerd[1477]: 2025-02-13 15:53:38.390 [INFO][3607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.110.144.28-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:53:38.429825 kubelet[1817]: E0213 15:53:38.429728    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:38.451264 containerd[1477]: time="2025-02-13T15:53:38.451067179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:38.452581 containerd[1477]: time="2025-02-13T15:53:38.452472694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:38.453407 containerd[1477]: time="2025-02-13T15:53:38.453336662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:38.454740 containerd[1477]: time="2025-02-13T15:53:38.454661698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:38.505117 systemd[1]: Started cri-containerd-01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef.scope - libcontainer container 01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef.
Feb 13 15:53:38.593894 containerd[1477]: time="2025-02-13T15:53:38.593570021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ef4e16ff-760e-47ae-81a7-55b7553e6ba4,Namespace:default,Attempt:0,} returns sandbox id \"01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef\""
Feb 13 15:53:38.599966 containerd[1477]: time="2025-02-13T15:53:38.599894038Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:53:39.438728 kubelet[1817]: E0213 15:53:39.430341    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:40.144998 systemd-networkd[1374]: cali60e51b789ff: Gained IPv6LL
Feb 13 15:53:40.431297 kubelet[1817]: E0213 15:53:40.430806    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:40.539192 systemd[1]: Started sshd@11-143.110.144.28:22-193.32.162.139:54942.service - OpenSSH per-connection server daemon (193.32.162.139:54942).
Feb 13 15:53:41.431137 kubelet[1817]: E0213 15:53:41.431078    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:41.484702 sshd[3683]: Connection closed by authenticating user root 193.32.162.139 port 54942 [preauth]
Feb 13 15:53:41.487349 systemd[1]: sshd@11-143.110.144.28:22-193.32.162.139:54942.service: Deactivated successfully.
Feb 13 15:53:42.397606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814049494.mount: Deactivated successfully.
Feb 13 15:53:42.437537 kubelet[1817]: E0213 15:53:42.434511    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:43.435302 kubelet[1817]: E0213 15:53:43.435121    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:44.436112 kubelet[1817]: E0213 15:53:44.435976    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:45.436809 kubelet[1817]: E0213 15:53:45.436702    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:46.099992 containerd[1477]: time="2025-02-13T15:53:46.099847229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:46.103335 containerd[1477]: time="2025-02-13T15:53:46.103244534Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:53:46.105198 containerd[1477]: time="2025-02-13T15:53:46.105033828Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:46.111335 containerd[1477]: time="2025-02-13T15:53:46.111180801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:46.144578 containerd[1477]: time="2025-02-13T15:53:46.144340203Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.544380359s"
Feb 13 15:53:46.144578 containerd[1477]: time="2025-02-13T15:53:46.144419503Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:53:46.148617 containerd[1477]: time="2025-02-13T15:53:46.148550532Z" level=info msg="CreateContainer within sandbox \"01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:53:46.170775 containerd[1477]: time="2025-02-13T15:53:46.170509656Z" level=info msg="CreateContainer within sandbox \"01213800429e876de8dc906c33cb6fd5da52f6ffd9f9a67bd648902de67c67ef\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"29cbe5a566ddeac53c5543e23f7a4989d7c34d77353bb4ac4249eff0eef7dfd7\""
Feb 13 15:53:46.171864 containerd[1477]: time="2025-02-13T15:53:46.171711221Z" level=info msg="StartContainer for \"29cbe5a566ddeac53c5543e23f7a4989d7c34d77353bb4ac4249eff0eef7dfd7\""
Feb 13 15:53:46.224538 systemd[1]: Started cri-containerd-29cbe5a566ddeac53c5543e23f7a4989d7c34d77353bb4ac4249eff0eef7dfd7.scope - libcontainer container 29cbe5a566ddeac53c5543e23f7a4989d7c34d77353bb4ac4249eff0eef7dfd7.
Feb 13 15:53:46.266380 containerd[1477]: time="2025-02-13T15:53:46.265900296Z" level=info msg="StartContainer for \"29cbe5a566ddeac53c5543e23f7a4989d7c34d77353bb4ac4249eff0eef7dfd7\" returns successfully"
Feb 13 15:53:46.438288 kubelet[1817]: E0213 15:53:46.437326    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:47.437867 kubelet[1817]: E0213 15:53:47.437768    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:48.438445 kubelet[1817]: E0213 15:53:48.438368    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:49.439706 kubelet[1817]: E0213 15:53:49.439623    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:50.356681 kubelet[1817]: E0213 15:53:50.356585    1817 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:50.387519 containerd[1477]: time="2025-02-13T15:53:50.387440705Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:50.388074 containerd[1477]: time="2025-02-13T15:53:50.387581194Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:50.388074 containerd[1477]: time="2025-02-13T15:53:50.387594037Z" level=info msg="StopPodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:50.393531 containerd[1477]: time="2025-02-13T15:53:50.393392019Z" level=info msg="RemovePodSandbox for \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:50.401923 containerd[1477]: time="2025-02-13T15:53:50.401651283Z" level=info msg="Forcibly stopping sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\""
Feb 13 15:53:50.422240 containerd[1477]: time="2025-02-13T15:53:50.401842834Z" level=info msg="TearDown network for sandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" successfully"
Feb 13 15:53:50.441430 kubelet[1817]: E0213 15:53:50.440034    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:50.457608 containerd[1477]: time="2025-02-13T15:53:50.456947282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.457608 containerd[1477]: time="2025-02-13T15:53:50.457065967Z" level=info msg="RemovePodSandbox \"5f4318767110a335b1f01bf6b76b9288fe5d735a8ffe6ed64e5f1082ed7b331b\" returns successfully"
Feb 13 15:53:50.457858 containerd[1477]: time="2025-02-13T15:53:50.457786145Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:50.458275 containerd[1477]: time="2025-02-13T15:53:50.457933738Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:50.458275 containerd[1477]: time="2025-02-13T15:53:50.458257619Z" level=info msg="StopPodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:50.459190 containerd[1477]: time="2025-02-13T15:53:50.458781658Z" level=info msg="RemovePodSandbox for \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:50.459190 containerd[1477]: time="2025-02-13T15:53:50.458816302Z" level=info msg="Forcibly stopping sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\""
Feb 13 15:53:50.459190 containerd[1477]: time="2025-02-13T15:53:50.458908681Z" level=info msg="TearDown network for sandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" successfully"
Feb 13 15:53:50.461879 containerd[1477]: time="2025-02-13T15:53:50.461771172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.461879 containerd[1477]: time="2025-02-13T15:53:50.461846963Z" level=info msg="RemovePodSandbox \"4fd9b9d408c3ac236620f9b9f9aa7184ade84cda8a1b4a024dba2821ccbaf825\" returns successfully"
Feb 13 15:53:50.463065 containerd[1477]: time="2025-02-13T15:53:50.462803519Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:50.463065 containerd[1477]: time="2025-02-13T15:53:50.462920838Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:50.463065 containerd[1477]: time="2025-02-13T15:53:50.462932947Z" level=info msg="StopPodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:50.464238 containerd[1477]: time="2025-02-13T15:53:50.463382194Z" level=info msg="RemovePodSandbox for \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:50.464238 containerd[1477]: time="2025-02-13T15:53:50.463411461Z" level=info msg="Forcibly stopping sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\""
Feb 13 15:53:50.464238 containerd[1477]: time="2025-02-13T15:53:50.463546147Z" level=info msg="TearDown network for sandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" successfully"
Feb 13 15:53:50.467493 containerd[1477]: time="2025-02-13T15:53:50.467419518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.467493 containerd[1477]: time="2025-02-13T15:53:50.467506924Z" level=info msg="RemovePodSandbox \"d7d8a25cfd39c7d7f04a47cf5d4017e4bf4fd8a3a5a076d63f624b19204842f8\" returns successfully"
Feb 13 15:53:50.468421 containerd[1477]: time="2025-02-13T15:53:50.468100862Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:50.468421 containerd[1477]: time="2025-02-13T15:53:50.468235023Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:50.468421 containerd[1477]: time="2025-02-13T15:53:50.468247438Z" level=info msg="StopPodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:50.468955 containerd[1477]: time="2025-02-13T15:53:50.468925385Z" level=info msg="RemovePodSandbox for \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:50.468955 containerd[1477]: time="2025-02-13T15:53:50.468957742Z" level=info msg="Forcibly stopping sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\""
Feb 13 15:53:50.469076 containerd[1477]: time="2025-02-13T15:53:50.469038044Z" level=info msg="TearDown network for sandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" successfully"
Feb 13 15:53:50.473719 containerd[1477]: time="2025-02-13T15:53:50.473614287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.473719 containerd[1477]: time="2025-02-13T15:53:50.473686603Z" level=info msg="RemovePodSandbox \"dfd62bec91777421667f4791a33482a4814050d13db8b2dd5ed6cf403dc0beea\" returns successfully"
Feb 13 15:53:50.474642 containerd[1477]: time="2025-02-13T15:53:50.474588243Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:50.474739 containerd[1477]: time="2025-02-13T15:53:50.474717876Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:50.474739 containerd[1477]: time="2025-02-13T15:53:50.474728925Z" level=info msg="StopPodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:50.475329 containerd[1477]: time="2025-02-13T15:53:50.475196780Z" level=info msg="RemovePodSandbox for \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:50.475329 containerd[1477]: time="2025-02-13T15:53:50.475222130Z" level=info msg="Forcibly stopping sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\""
Feb 13 15:53:50.475329 containerd[1477]: time="2025-02-13T15:53:50.475292373Z" level=info msg="TearDown network for sandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" successfully"
Feb 13 15:53:50.479140 containerd[1477]: time="2025-02-13T15:53:50.479054809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.479140 containerd[1477]: time="2025-02-13T15:53:50.479172452Z" level=info msg="RemovePodSandbox \"2f5ce3aed72fcc1dcf56e35e64307b01de0e3296e3b5e0857940f18a8a352dc3\" returns successfully"
Feb 13 15:53:50.479867 containerd[1477]: time="2025-02-13T15:53:50.479823528Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:50.480008 containerd[1477]: time="2025-02-13T15:53:50.479974666Z" level=info msg="TearDown network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" successfully"
Feb 13 15:53:50.480008 containerd[1477]: time="2025-02-13T15:53:50.479997726Z" level=info msg="StopPodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" returns successfully"
Feb 13 15:53:50.480510 containerd[1477]: time="2025-02-13T15:53:50.480483608Z" level=info msg="RemovePodSandbox for \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:50.481185 containerd[1477]: time="2025-02-13T15:53:50.480667679Z" level=info msg="Forcibly stopping sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\""
Feb 13 15:53:50.481185 containerd[1477]: time="2025-02-13T15:53:50.480817004Z" level=info msg="TearDown network for sandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" successfully"
Feb 13 15:53:50.485182 containerd[1477]: time="2025-02-13T15:53:50.485020116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.485440 containerd[1477]: time="2025-02-13T15:53:50.485197283Z" level=info msg="RemovePodSandbox \"80be589344e32c9220593b84e9846fe4bd35568322f3ffce7358d1025292ff00\" returns successfully"
Feb 13 15:53:50.486555 containerd[1477]: time="2025-02-13T15:53:50.486308986Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\""
Feb 13 15:53:50.487418 containerd[1477]: time="2025-02-13T15:53:50.486852617Z" level=info msg="TearDown network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" successfully"
Feb 13 15:53:50.487418 containerd[1477]: time="2025-02-13T15:53:50.486885676Z" level=info msg="StopPodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" returns successfully"
Feb 13 15:53:50.487620 containerd[1477]: time="2025-02-13T15:53:50.487585920Z" level=info msg="RemovePodSandbox for \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\""
Feb 13 15:53:50.487664 containerd[1477]: time="2025-02-13T15:53:50.487628345Z" level=info msg="Forcibly stopping sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\""
Feb 13 15:53:50.487848 containerd[1477]: time="2025-02-13T15:53:50.487757115Z" level=info msg="TearDown network for sandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" successfully"
Feb 13 15:53:50.492295 containerd[1477]: time="2025-02-13T15:53:50.492207374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.492499 containerd[1477]: time="2025-02-13T15:53:50.492347217Z" level=info msg="RemovePodSandbox \"9c86813550e4cb18da241b718833461afbbc4762b48862be6c9400fbd63e7139\" returns successfully"
Feb 13 15:53:50.493497 containerd[1477]: time="2025-02-13T15:53:50.493242129Z" level=info msg="StopPodSandbox for \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\""
Feb 13 15:53:50.493497 containerd[1477]: time="2025-02-13T15:53:50.493377307Z" level=info msg="TearDown network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" successfully"
Feb 13 15:53:50.493497 containerd[1477]: time="2025-02-13T15:53:50.493392380Z" level=info msg="StopPodSandbox for \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" returns successfully"
Feb 13 15:53:50.494730 containerd[1477]: time="2025-02-13T15:53:50.494697131Z" level=info msg="RemovePodSandbox for \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\""
Feb 13 15:53:50.494939 containerd[1477]: time="2025-02-13T15:53:50.494919449Z" level=info msg="Forcibly stopping sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\""
Feb 13 15:53:50.496022 containerd[1477]: time="2025-02-13T15:53:50.495085122Z" level=info msg="TearDown network for sandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" successfully"
Feb 13 15:53:50.500735 containerd[1477]: time="2025-02-13T15:53:50.500458208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.500735 containerd[1477]: time="2025-02-13T15:53:50.500556986Z" level=info msg="RemovePodSandbox \"740841d737d0a66a44e4d16ef90b8d9ab4614c2571a3748908bd730232163245\" returns successfully"
Feb 13 15:53:50.501628 containerd[1477]: time="2025-02-13T15:53:50.501590903Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:50.501751 containerd[1477]: time="2025-02-13T15:53:50.501728712Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:50.501822 containerd[1477]: time="2025-02-13T15:53:50.501752213Z" level=info msg="StopPodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:50.503462 containerd[1477]: time="2025-02-13T15:53:50.502633994Z" level=info msg="RemovePodSandbox for \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:50.503462 containerd[1477]: time="2025-02-13T15:53:50.502681404Z" level=info msg="Forcibly stopping sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\""
Feb 13 15:53:50.503462 containerd[1477]: time="2025-02-13T15:53:50.502784102Z" level=info msg="TearDown network for sandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" successfully"
Feb 13 15:53:50.507631 containerd[1477]: time="2025-02-13T15:53:50.507564585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.507950 containerd[1477]: time="2025-02-13T15:53:50.507891827Z" level=info msg="RemovePodSandbox \"1c54a47037b08c21c7e03a4fd2378e5f11129f6b15bc757e4af7b9e944596ede\" returns successfully"
Feb 13 15:53:50.509029 containerd[1477]: time="2025-02-13T15:53:50.508684976Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:50.509029 containerd[1477]: time="2025-02-13T15:53:50.508834730Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:50.509029 containerd[1477]: time="2025-02-13T15:53:50.508854492Z" level=info msg="StopPodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:50.511239 containerd[1477]: time="2025-02-13T15:53:50.509668217Z" level=info msg="RemovePodSandbox for \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:50.511239 containerd[1477]: time="2025-02-13T15:53:50.509701232Z" level=info msg="Forcibly stopping sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\""
Feb 13 15:53:50.511239 containerd[1477]: time="2025-02-13T15:53:50.509777579Z" level=info msg="TearDown network for sandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" successfully"
Feb 13 15:53:50.513260 containerd[1477]: time="2025-02-13T15:53:50.513056053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.513260 containerd[1477]: time="2025-02-13T15:53:50.513126399Z" level=info msg="RemovePodSandbox \"91e8fb9eebac04c1f2b1c998a06be52079ae66d96d7ddca80da80e39f6355aac\" returns successfully"
Feb 13 15:53:50.513707 containerd[1477]: time="2025-02-13T15:53:50.513664794Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:50.513843 containerd[1477]: time="2025-02-13T15:53:50.513822450Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:50.513904 containerd[1477]: time="2025-02-13T15:53:50.513842368Z" level=info msg="StopPodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:50.514634 containerd[1477]: time="2025-02-13T15:53:50.514550269Z" level=info msg="RemovePodSandbox for \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:50.514634 containerd[1477]: time="2025-02-13T15:53:50.514614712Z" level=info msg="Forcibly stopping sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\""
Feb 13 15:53:50.514787 containerd[1477]: time="2025-02-13T15:53:50.514696599Z" level=info msg="TearDown network for sandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" successfully"
Feb 13 15:53:50.518747 containerd[1477]: time="2025-02-13T15:53:50.518600895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.518747 containerd[1477]: time="2025-02-13T15:53:50.518736499Z" level=info msg="RemovePodSandbox \"0e67601d495132bd3f595476e39aa342c593fb751c83b14444dcdc9bcd2ac8e9\" returns successfully"
Feb 13 15:53:50.520193 containerd[1477]: time="2025-02-13T15:53:50.519599443Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:50.520193 containerd[1477]: time="2025-02-13T15:53:50.519786657Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:50.520193 containerd[1477]: time="2025-02-13T15:53:50.519812954Z" level=info msg="StopPodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:50.520667 containerd[1477]: time="2025-02-13T15:53:50.520631889Z" level=info msg="RemovePodSandbox for \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:50.520806 containerd[1477]: time="2025-02-13T15:53:50.520783690Z" level=info msg="Forcibly stopping sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\""
Feb 13 15:53:50.522732 containerd[1477]: time="2025-02-13T15:53:50.521058624Z" level=info msg="TearDown network for sandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" successfully"
Feb 13 15:53:50.529360 containerd[1477]: time="2025-02-13T15:53:50.528165293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.529360 containerd[1477]: time="2025-02-13T15:53:50.528253204Z" level=info msg="RemovePodSandbox \"cfd15060c5fce3b900511e6a49c88b879b467a5ced7c135b2b24e74234806ebc\" returns successfully"
Feb 13 15:53:50.541287 containerd[1477]: time="2025-02-13T15:53:50.541029338Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:50.541954 containerd[1477]: time="2025-02-13T15:53:50.541487238Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:50.541954 containerd[1477]: time="2025-02-13T15:53:50.541516667Z" level=info msg="StopPodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:50.543490 containerd[1477]: time="2025-02-13T15:53:50.543452050Z" level=info msg="RemovePodSandbox for \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:50.545233 containerd[1477]: time="2025-02-13T15:53:50.544034253Z" level=info msg="Forcibly stopping sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\""
Feb 13 15:53:50.545233 containerd[1477]: time="2025-02-13T15:53:50.544204132Z" level=info msg="TearDown network for sandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" successfully"
Feb 13 15:53:50.548173 containerd[1477]: time="2025-02-13T15:53:50.548091653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.548778 containerd[1477]: time="2025-02-13T15:53:50.548745512Z" level=info msg="RemovePodSandbox \"61d1b6fcd6630fad35834672ccda7e74bbcc7bb1b053f762b1fa34c6281c84d6\" returns successfully"
Feb 13 15:53:50.549500 containerd[1477]: time="2025-02-13T15:53:50.549466116Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:50.549783 containerd[1477]: time="2025-02-13T15:53:50.549757300Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:50.549875 containerd[1477]: time="2025-02-13T15:53:50.549857186Z" level=info msg="StopPodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:50.550706 containerd[1477]: time="2025-02-13T15:53:50.550669980Z" level=info msg="RemovePodSandbox for \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:50.550861 containerd[1477]: time="2025-02-13T15:53:50.550840219Z" level=info msg="Forcibly stopping sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\""
Feb 13 15:53:50.551172 containerd[1477]: time="2025-02-13T15:53:50.551078899Z" level=info msg="TearDown network for sandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" successfully"
Feb 13 15:53:50.555298 containerd[1477]: time="2025-02-13T15:53:50.555247426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.555462 containerd[1477]: time="2025-02-13T15:53:50.555311349Z" level=info msg="RemovePodSandbox \"df430f152d90f640b9a1937520f75514e30c7f3f32935c4153c427b2a72616c6\" returns successfully"
Feb 13 15:53:50.561825 containerd[1477]: time="2025-02-13T15:53:50.561718796Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:50.562362 containerd[1477]: time="2025-02-13T15:53:50.562233600Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:50.562362 containerd[1477]: time="2025-02-13T15:53:50.562269261Z" level=info msg="StopPodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:50.571055 containerd[1477]: time="2025-02-13T15:53:50.571000826Z" level=info msg="RemovePodSandbox for \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:50.571055 containerd[1477]: time="2025-02-13T15:53:50.571052617Z" level=info msg="Forcibly stopping sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\""
Feb 13 15:53:50.571576 containerd[1477]: time="2025-02-13T15:53:50.571198954Z" level=info msg="TearDown network for sandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" successfully"
Feb 13 15:53:50.584506 containerd[1477]: time="2025-02-13T15:53:50.584208013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.584506 containerd[1477]: time="2025-02-13T15:53:50.584304055Z" level=info msg="RemovePodSandbox \"219aed8161f71b0dd73507a111cc1a92afdbee7e986199d30f64c5b3ce6836ff\" returns successfully"
Feb 13 15:53:50.585407 containerd[1477]: time="2025-02-13T15:53:50.585196463Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:50.585407 containerd[1477]: time="2025-02-13T15:53:50.585309519Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:50.585407 containerd[1477]: time="2025-02-13T15:53:50.585320031Z" level=info msg="StopPodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:50.587089 containerd[1477]: time="2025-02-13T15:53:50.585761870Z" level=info msg="RemovePodSandbox for \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:50.587089 containerd[1477]: time="2025-02-13T15:53:50.585790098Z" level=info msg="Forcibly stopping sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\""
Feb 13 15:53:50.587089 containerd[1477]: time="2025-02-13T15:53:50.585862392Z" level=info msg="TearDown network for sandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" successfully"
Feb 13 15:53:50.589744 containerd[1477]: time="2025-02-13T15:53:50.589676920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.590275 containerd[1477]: time="2025-02-13T15:53:50.590238453Z" level=info msg="RemovePodSandbox \"c581f4ebb53aa784438ac16f2033d0280d03d818905790755ef008e157aab208\" returns successfully"
Feb 13 15:53:50.591370 containerd[1477]: time="2025-02-13T15:53:50.591030964Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:50.591524 containerd[1477]: time="2025-02-13T15:53:50.591499062Z" level=info msg="TearDown network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" successfully"
Feb 13 15:53:50.591573 containerd[1477]: time="2025-02-13T15:53:50.591527211Z" level=info msg="StopPodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" returns successfully"
Feb 13 15:53:50.592847 containerd[1477]: time="2025-02-13T15:53:50.591950900Z" level=info msg="RemovePodSandbox for \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:50.592847 containerd[1477]: time="2025-02-13T15:53:50.591986240Z" level=info msg="Forcibly stopping sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\""
Feb 13 15:53:50.592847 containerd[1477]: time="2025-02-13T15:53:50.592071586Z" level=info msg="TearDown network for sandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" successfully"
Feb 13 15:53:50.610184 containerd[1477]: time="2025-02-13T15:53:50.607580072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.610549 containerd[1477]: time="2025-02-13T15:53:50.610460970Z" level=info msg="RemovePodSandbox \"0dfa673d81ea2635d04ee5acbb072124817c74125a5e7eb7f90bc4f0eff7509d\" returns successfully"
Feb 13 15:53:50.611864 containerd[1477]: time="2025-02-13T15:53:50.611563209Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\""
Feb 13 15:53:50.612852 containerd[1477]: time="2025-02-13T15:53:50.612596465Z" level=info msg="TearDown network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" successfully"
Feb 13 15:53:50.613894 containerd[1477]: time="2025-02-13T15:53:50.613333583Z" level=info msg="StopPodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" returns successfully"
Feb 13 15:53:50.614645 containerd[1477]: time="2025-02-13T15:53:50.614603666Z" level=info msg="RemovePodSandbox for \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\""
Feb 13 15:53:50.614744 containerd[1477]: time="2025-02-13T15:53:50.614658786Z" level=info msg="Forcibly stopping sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\""
Feb 13 15:53:50.615128 containerd[1477]: time="2025-02-13T15:53:50.614813128Z" level=info msg="TearDown network for sandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" successfully"
Feb 13 15:53:50.619613 containerd[1477]: time="2025-02-13T15:53:50.619522177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.619613 containerd[1477]: time="2025-02-13T15:53:50.619616376Z" level=info msg="RemovePodSandbox \"6749b69aae82ed37d3b90181e7b0d1c4dec7865ea29fbab7316f5cacb312b469\" returns successfully"
Feb 13 15:53:50.620270 containerd[1477]: time="2025-02-13T15:53:50.620231429Z" level=info msg="StopPodSandbox for \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\""
Feb 13 15:53:50.620447 containerd[1477]: time="2025-02-13T15:53:50.620420916Z" level=info msg="TearDown network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" successfully"
Feb 13 15:53:50.620519 containerd[1477]: time="2025-02-13T15:53:50.620443269Z" level=info msg="StopPodSandbox for \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" returns successfully"
Feb 13 15:53:50.621257 containerd[1477]: time="2025-02-13T15:53:50.621028791Z" level=info msg="RemovePodSandbox for \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\""
Feb 13 15:53:50.621257 containerd[1477]: time="2025-02-13T15:53:50.621071991Z" level=info msg="Forcibly stopping sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\""
Feb 13 15:53:50.621703 containerd[1477]: time="2025-02-13T15:53:50.621544874Z" level=info msg="TearDown network for sandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" successfully"
Feb 13 15:53:50.625735 containerd[1477]: time="2025-02-13T15:53:50.625668359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:53:50.626173 containerd[1477]: time="2025-02-13T15:53:50.625979437Z" level=info msg="RemovePodSandbox \"c06a9e5cbcae0f61af3681b324edb8645c0b75a5ebe4244d0fa985ae54cf822d\" returns successfully"
Feb 13 15:53:51.441055 kubelet[1817]: E0213 15:53:51.440945    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:52.441795 kubelet[1817]: E0213 15:53:52.441721    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:53.442566 kubelet[1817]: E0213 15:53:53.442475    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:54.443609 kubelet[1817]: E0213 15:53:54.443496    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:55.444094 kubelet[1817]: E0213 15:53:55.444031    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:55.953184 kubelet[1817]: I0213 15:53:55.952200    1817 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.405399887 podStartE2EDuration="18.95212157s" podCreationTimestamp="2025-02-13 15:53:37 +0000 UTC" firstStartedPulling="2025-02-13 15:53:38.598330242 +0000 UTC m=+48.825972311" lastFinishedPulling="2025-02-13 15:53:46.145051945 +0000 UTC m=+56.372693994" observedRunningTime="2025-02-13 15:53:47.147588618 +0000 UTC m=+57.375230686" watchObservedRunningTime="2025-02-13 15:53:55.95212157 +0000 UTC m=+66.179763631"
Feb 13 15:53:55.953184 kubelet[1817]: I0213 15:53:55.952336    1817 topology_manager.go:215] "Topology Admit Handler" podUID="c8d62950-85da-47fb-b293-7c48d63dc8c2" podNamespace="default" podName="test-pod-1"
Feb 13 15:53:55.966753 systemd[1]: Created slice kubepods-besteffort-podc8d62950_85da_47fb_b293_7c48d63dc8c2.slice - libcontainer container kubepods-besteffort-podc8d62950_85da_47fb_b293_7c48d63dc8c2.slice.
Feb 13 15:53:56.113918 kubelet[1817]: I0213 15:53:56.113837    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdgkq\" (UniqueName: \"kubernetes.io/projected/c8d62950-85da-47fb-b293-7c48d63dc8c2-kube-api-access-xdgkq\") pod \"test-pod-1\" (UID: \"c8d62950-85da-47fb-b293-7c48d63dc8c2\") " pod="default/test-pod-1"
Feb 13 15:53:56.114762 kubelet[1817]: I0213 15:53:56.114697    1817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-782313db-0e11-4620-a416-e1a12761bba4\" (UniqueName: \"kubernetes.io/nfs/c8d62950-85da-47fb-b293-7c48d63dc8c2-pvc-782313db-0e11-4620-a416-e1a12761bba4\") pod \"test-pod-1\" (UID: \"c8d62950-85da-47fb-b293-7c48d63dc8c2\") " pod="default/test-pod-1"
Feb 13 15:53:56.271442 kernel: FS-Cache: Loaded
Feb 13 15:53:56.383678 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:53:56.383857 kernel: RPC: Registered udp transport module.
Feb 13 15:53:56.383893 kernel: RPC: Registered tcp transport module.
Feb 13 15:53:56.383929 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:53:56.383965 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:53:56.445692 kubelet[1817]: E0213 15:53:56.444353    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:56.697413 kernel: NFS: Registering the id_resolver key type
Feb 13 15:53:56.697631 kernel: Key type id_resolver registered
Feb 13 15:53:56.701474 kernel: Key type id_legacy registered
Feb 13 15:53:56.769893 nfsidmap[3834]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-b-9620dd7e41'
Feb 13 15:53:56.780678 nfsidmap[3835]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.1-b-9620dd7e41'
Feb 13 15:53:56.871382 containerd[1477]: time="2025-02-13T15:53:56.871198292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c8d62950-85da-47fb-b293-7c48d63dc8c2,Namespace:default,Attempt:0,}"
Feb 13 15:53:57.193921 systemd-networkd[1374]: cali5ec59c6bf6e: Link UP
Feb 13 15:53:57.194607 systemd-networkd[1374]: cali5ec59c6bf6e: Gained carrier
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:56.977 [INFO][3839] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.110.144.28-k8s-test--pod--1-eth0  default  c8d62950-85da-47fb-b293-7c48d63dc8c2 1419 0 2025-02-13 15:53:40 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  143.110.144.28  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:56.978 [INFO][3839] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.045 [INFO][3850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" HandleID="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Workload="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.083 [INFO][3850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" HandleID="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Workload="143.110.144.28-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292350), Attrs:map[string]string{"namespace":"default", "node":"143.110.144.28", "pod":"test-pod-1", "timestamp":"2025-02-13 15:53:57.045455934 +0000 UTC"}, Hostname:"143.110.144.28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.083 [INFO][3850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.084 [INFO][3850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.084 [INFO][3850] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.110.144.28'
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.089 [INFO][3850] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.108 [INFO][3850] ipam/ipam.go 372: Looking up existing affinities for host host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.121 [INFO][3850] ipam/ipam.go 489: Trying affinity for 192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.127 [INFO][3850] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.137 [INFO][3850] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.137 [INFO][3850] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.151 [INFO][3850] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.167 [INFO][3850] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.183 [INFO][3850] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.4/26] block=192.168.91.0/26 handle="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.183 [INFO][3850] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.4/26] handle="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" host="143.110.144.28"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.184 [INFO][3850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.184 [INFO][3850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.4/26] IPv6=[] ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" HandleID="k8s-pod-network.7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Workload="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.218783 containerd[1477]: 2025-02-13 15:53:57.186 [INFO][3839] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c8d62950-85da-47fb-b293-7c48d63dc8c2", ResourceVersion:"1419", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:57.220133 containerd[1477]: 2025-02-13 15:53:57.187 [INFO][3839] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.4/32] ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.220133 containerd[1477]: 2025-02-13 15:53:57.187 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.220133 containerd[1477]: 2025-02-13 15:53:57.193 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.220133 containerd[1477]: 2025-02-13 15:53:57.195 [INFO][3839] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.110.144.28-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c8d62950-85da-47fb-b293-7c48d63dc8c2", ResourceVersion:"1419", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 53, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.110.144.28", ContainerID:"7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:fe:28:1d:28:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:57.220133 containerd[1477]: 2025-02-13 15:53:57.215 [INFO][3839] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.110.144.28-k8s-test--pod--1-eth0"
Feb 13 15:53:57.279309 containerd[1477]: time="2025-02-13T15:53:57.279064649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:57.279855 containerd[1477]: time="2025-02-13T15:53:57.279551642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:57.279855 containerd[1477]: time="2025-02-13T15:53:57.279615008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:57.281036 containerd[1477]: time="2025-02-13T15:53:57.280573148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:57.321449 systemd[1]: run-containerd-runc-k8s.io-7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b-runc.hfZYov.mount: Deactivated successfully.
Feb 13 15:53:57.334650 systemd[1]: Started cri-containerd-7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b.scope - libcontainer container 7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b.
Feb 13 15:53:57.393683 containerd[1477]: time="2025-02-13T15:53:57.393625555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c8d62950-85da-47fb-b293-7c48d63dc8c2,Namespace:default,Attempt:0,} returns sandbox id \"7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b\""
Feb 13 15:53:57.396582 containerd[1477]: time="2025-02-13T15:53:57.396534872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:53:57.445792 kubelet[1817]: E0213 15:53:57.445605    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:57.831508 containerd[1477]: time="2025-02-13T15:53:57.831320371Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:53:57.833701 containerd[1477]: time="2025-02-13T15:53:57.833243336Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:57.834713 containerd[1477]: time="2025-02-13T15:53:57.834658784Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 438.039524ms"
Feb 13 15:53:57.834986 containerd[1477]: time="2025-02-13T15:53:57.834849048Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:53:57.837742 containerd[1477]: time="2025-02-13T15:53:57.837692529Z" level=info msg="CreateContainer within sandbox \"7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:53:57.860043 containerd[1477]: time="2025-02-13T15:53:57.859952359Z" level=info msg="CreateContainer within sandbox \"7a2f84e30669ce0a2b078de1405437a1c4e749742a88b79c59d90e33ad8e8f8b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7cbb8a2a3cc093890ec2f70d702eacc1151e6cee223af32eeaa2b45f9ec19f9b\""
Feb 13 15:53:57.861329 containerd[1477]: time="2025-02-13T15:53:57.861245420Z" level=info msg="StartContainer for \"7cbb8a2a3cc093890ec2f70d702eacc1151e6cee223af32eeaa2b45f9ec19f9b\""
Feb 13 15:53:57.899535 systemd[1]: Started cri-containerd-7cbb8a2a3cc093890ec2f70d702eacc1151e6cee223af32eeaa2b45f9ec19f9b.scope - libcontainer container 7cbb8a2a3cc093890ec2f70d702eacc1151e6cee223af32eeaa2b45f9ec19f9b.
Feb 13 15:53:57.945282 containerd[1477]: time="2025-02-13T15:53:57.944650774Z" level=info msg="StartContainer for \"7cbb8a2a3cc093890ec2f70d702eacc1151e6cee223af32eeaa2b45f9ec19f9b\" returns successfully"
Feb 13 15:53:58.004696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822917641.mount: Deactivated successfully.
Feb 13 15:53:58.446869 kubelet[1817]: E0213 15:53:58.446783    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:58.448621 systemd-networkd[1374]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 15:53:59.447094 kubelet[1817]: E0213 15:53:59.447029    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:00.447877 kubelet[1817]: E0213 15:54:00.447785    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:01.448195 kubelet[1817]: E0213 15:54:01.448088    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:01.511024 kubelet[1817]: E0213 15:54:01.510313    1817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:54:02.449792 kubelet[1817]: E0213 15:54:02.449589    1817 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"