Feb 13 15:51:14.112880 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:51:14.112925 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:14.112941 kernel: BIOS-provided physical RAM map:
Feb 13 15:51:14.112953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:51:14.113013 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:51:14.113024 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:51:14.113041 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:51:14.113051 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:51:14.113062 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:51:14.113378 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:51:14.113405 kernel: NX (Execute Disable) protection: active
Feb 13 15:51:14.113419 kernel: APIC: Static calls initialized
Feb 13 15:51:14.113434 kernel: SMBIOS 2.7 present.
Feb 13 15:51:14.113449 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:51:14.113473 kernel: Hypervisor detected: KVM
Feb 13 15:51:14.113489 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:51:14.113505 kernel: kvm-clock: using sched offset of 9878415304 cycles
Feb 13 15:51:14.113523 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:51:14.113539 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 15:51:14.113556 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:51:14.113572 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:51:14.113593 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:51:14.113609 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:51:14.113625 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:51:14.113642 kernel: Using GB pages for direct mapping
Feb 13 15:51:14.113692 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:51:14.113709 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:51:14.113726 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:51:14.113743 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:51:14.113759 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:51:14.113779 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:51:14.113796 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:51:14.113812 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:51:14.113829 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:51:14.113846 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:51:14.113862 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:51:14.113878 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:51:14.113977 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:51:14.113995 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:51:14.114017 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:51:14.114041 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:51:14.114058 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:51:14.114075 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:51:14.114092 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:51:14.114112 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:51:14.114130 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:51:14.114148 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:51:14.114165 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:51:14.114181 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:51:14.114199 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:51:14.114216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:51:14.114232 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:51:14.114249 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:51:14.114270 kernel: Zone ranges:
Feb 13 15:51:14.114288 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:51:14.114305 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:51:14.114323 kernel:   Normal   empty
Feb 13 15:51:14.114340 kernel: Movable zone start for each node
Feb 13 15:51:14.114357 kernel: Early memory node ranges
Feb 13 15:51:14.114373 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:51:14.114391 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:51:14.114407 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:51:14.114425 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:51:14.114446 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:51:14.114463 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:51:14.114481 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:51:14.114499 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:51:14.114515 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:51:14.114604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:51:14.114618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:51:14.114630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:51:14.114642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:51:14.114675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:51:14.114687 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:51:14.114699 kernel: TSC deadline timer available
Feb 13 15:51:14.114712 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:51:14.114726 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:51:14.114740 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:51:14.114754 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:51:14.114767 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:51:14.114782 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:51:14.114798 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:51:14.114811 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:51:14.114914 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:51:14.114927 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:51:14.114940 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:51:14.114957 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:14.114972 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:51:14.114984 kernel: random: crng init done
Feb 13 15:51:14.115000 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:51:14.115015 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:51:14.115030 kernel: Fallback order for Node 0: 0 
Feb 13 15:51:14.115048 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Feb 13 15:51:14.115066 kernel: Policy zone: DMA32
Feb 13 15:51:14.115085 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:51:14.115105 kernel: Memory: 1930296K/2057760K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127204K reserved, 0K cma-reserved)
Feb 13 15:51:14.115124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:51:14.115139 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:51:14.115156 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:51:14.115169 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:51:14.115182 kernel: Dynamic Preempt: voluntary
Feb 13 15:51:14.115195 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:51:14.115216 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:51:14.115231 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:51:14.115246 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:51:14.115261 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:51:14.115275 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:51:14.115294 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:51:14.115309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:51:14.115325 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:51:14.115340 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:51:14.115355 kernel: Console: colour VGA+ 80x25
Feb 13 15:51:14.115369 kernel: printk: console [ttyS0] enabled
Feb 13 15:51:14.115385 kernel: ACPI: Core revision 20230628
Feb 13 15:51:14.115400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:51:14.115415 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:51:14.115434 kernel: x2apic enabled
Feb 13 15:51:14.115450 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:51:14.115479 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 15:51:14.115499 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 15:51:14.115515 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:51:14.115531 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:51:14.115547 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:51:14.115562 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:51:14.115578 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:51:14.115594 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:51:14.115610 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:51:14.115626 kernel: RETBleed: Vulnerable
Feb 13 15:51:14.115642 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:51:14.115676 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:51:14.115691 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:51:14.115707 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:51:14.115722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:51:14.115738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:51:14.115755 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:51:14.115774 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:51:14.115790 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:51:14.115806 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:51:14.115822 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:51:14.115838 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:51:14.115855 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:51:14.115871 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:51:14.115887 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Feb 13 15:51:14.115902 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Feb 13 15:51:14.115918 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Feb 13 15:51:14.115934 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Feb 13 15:51:14.115954 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:51:14.115969 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Feb 13 15:51:14.115985 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:51:14.116001 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:51:14.116016 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:51:14.116032 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:51:14.116048 kernel: landlock: Up and running.
Feb 13 15:51:14.116063 kernel: SELinux:  Initializing.
Feb 13 15:51:14.116080 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:51:14.116096 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:51:14.116113 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Feb 13 15:51:14.116134 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:14.116150 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:14.116167 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:14.116183 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:51:14.116198 kernel: signal: max sigframe size: 3632
Feb 13 15:51:14.116214 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:51:14.116231 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:51:14.116247 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:51:14.116264 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:51:14.116284 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:51:14.116301 kernel: .... node  #0, CPUs:      #1
Feb 13 15:51:14.116318 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:51:14.116336 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:51:14.116352 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:51:14.116368 kernel: smpboot: Max logical packages: 1
Feb 13 15:51:14.116384 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 15:51:14.116400 kernel: devtmpfs: initialized
Feb 13 15:51:14.116415 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:51:14.116435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:51:14.116450 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:51:14.116465 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:51:14.116481 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:51:14.116497 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:51:14.116513 kernel: audit: type=2000 audit(1739461872.964:1): state=initialized audit_enabled=0 res=1
Feb 13 15:51:14.116526 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:51:14.116540 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:51:14.116554 kernel: cpuidle: using governor menu
Feb 13 15:51:14.116573 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:51:14.116587 kernel: dca service started, version 1.12.1
Feb 13 15:51:14.116602 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:51:14.116617 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:51:14.116631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:51:14.116645 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:51:14.116675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:51:14.116687 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:51:14.116700 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:51:14.116718 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:51:14.116732 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:51:14.116746 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:51:14.116761 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:51:14.116776 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:51:14.116791 kernel: ACPI: Interpreter enabled
Feb 13 15:51:14.116806 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:51:14.116823 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:51:14.116836 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:51:14.116854 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:51:14.116866 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:51:14.116880 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:51:14.117107 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:51:14.117251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:51:14.117510 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:51:14.117540 kernel: acpiphp: Slot [3] registered
Feb 13 15:51:14.117566 kernel: acpiphp: Slot [4] registered
Feb 13 15:51:14.117587 kernel: acpiphp: Slot [5] registered
Feb 13 15:51:14.117608 kernel: acpiphp: Slot [6] registered
Feb 13 15:51:14.117626 kernel: acpiphp: Slot [7] registered
Feb 13 15:51:14.117641 kernel: acpiphp: Slot [8] registered
Feb 13 15:51:14.117682 kernel: acpiphp: Slot [9] registered
Feb 13 15:51:14.117695 kernel: acpiphp: Slot [10] registered
Feb 13 15:51:14.117709 kernel: acpiphp: Slot [11] registered
Feb 13 15:51:14.117723 kernel: acpiphp: Slot [12] registered
Feb 13 15:51:14.117742 kernel: acpiphp: Slot [13] registered
Feb 13 15:51:14.117758 kernel: acpiphp: Slot [14] registered
Feb 13 15:51:14.117773 kernel: acpiphp: Slot [15] registered
Feb 13 15:51:14.117789 kernel: acpiphp: Slot [16] registered
Feb 13 15:51:14.120296 kernel: acpiphp: Slot [17] registered
Feb 13 15:51:14.120315 kernel: acpiphp: Slot [18] registered
Feb 13 15:51:14.120331 kernel: acpiphp: Slot [19] registered
Feb 13 15:51:14.120346 kernel: acpiphp: Slot [20] registered
Feb 13 15:51:14.120362 kernel: acpiphp: Slot [21] registered
Feb 13 15:51:14.120378 kernel: acpiphp: Slot [22] registered
Feb 13 15:51:14.120400 kernel: acpiphp: Slot [23] registered
Feb 13 15:51:14.120416 kernel: acpiphp: Slot [24] registered
Feb 13 15:51:14.120432 kernel: acpiphp: Slot [25] registered
Feb 13 15:51:14.120448 kernel: acpiphp: Slot [26] registered
Feb 13 15:51:14.120465 kernel: acpiphp: Slot [27] registered
Feb 13 15:51:14.120481 kernel: acpiphp: Slot [28] registered
Feb 13 15:51:14.120495 kernel: acpiphp: Slot [29] registered
Feb 13 15:51:14.120512 kernel: acpiphp: Slot [30] registered
Feb 13 15:51:14.120528 kernel: acpiphp: Slot [31] registered
Feb 13 15:51:14.120548 kernel: PCI host bridge to bus 0000:00
Feb 13 15:51:14.120758 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:51:14.120895 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:51:14.121030 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:51:14.121162 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:51:14.121394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:51:14.121590 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:51:14.121767 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:51:14.124428 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:51:14.124588 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:51:14.124789 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:51:14.124951 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:51:14.125111 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:51:14.125268 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:51:14.125416 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:51:14.125556 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:51:14.125704 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:51:14.125828 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 12695 usecs
Feb 13 15:51:14.125974 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:51:14.126103 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:51:14.126329 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:51:14.126472 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:51:14.126611 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:51:14.126769 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:51:14.127050 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:51:14.127210 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:51:14.127232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:51:14.127253 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:51:14.127269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:51:14.127284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:51:14.127299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:51:14.127315 kernel: iommu: Default domain type: Translated
Feb 13 15:51:14.127330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:51:14.127344 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:51:14.127359 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:51:14.127373 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:51:14.127391 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:51:14.127543 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:51:14.127724 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:51:14.127850 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:51:14.127870 kernel: vgaarb: loaded
Feb 13 15:51:14.127886 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:51:14.130455 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:51:14.130489 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:51:14.130504 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:51:14.130594 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:51:14.130608 kernel: pnp: PnP ACPI init
Feb 13 15:51:14.130623 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:51:14.130638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:51:14.130674 kernel: NET: Registered PF_INET protocol family
Feb 13 15:51:14.130690 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:51:14.130706 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:51:14.130721 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:51:14.130736 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:51:14.130755 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:51:14.130770 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:51:14.130785 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:51:14.130800 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:51:14.130815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:51:14.130906 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:51:14.131091 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:51:14.131320 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:51:14.131452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:51:14.131575 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:51:14.131758 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:51:14.131780 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:51:14.131792 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:51:14.131805 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 15:51:14.131818 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:51:14.131831 kernel: Initialise system trusted keyrings
Feb 13 15:51:14.131850 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:51:14.131863 kernel: Key type asymmetric registered
Feb 13 15:51:14.131876 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:51:14.131888 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:51:14.131901 kernel: io scheduler mq-deadline registered
Feb 13 15:51:14.131915 kernel: io scheduler kyber registered
Feb 13 15:51:14.131928 kernel: io scheduler bfq registered
Feb 13 15:51:14.131944 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:51:14.131957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:51:14.131976 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:51:14.131990 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:51:14.132003 kernel: i8042: Warning: Keylock active
Feb 13 15:51:14.132016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:51:14.132031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:51:14.132176 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:51:14.132297 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:51:14.132418 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:51:13 UTC (1739461873)
Feb 13 15:51:14.132761 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:51:14.132786 kernel: intel_pstate: CPU model not supported
Feb 13 15:51:14.132801 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:51:14.132815 kernel: Segment Routing with IPv6
Feb 13 15:51:14.132829 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:51:14.132842 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:51:14.132856 kernel: Key type dns_resolver registered
Feb 13 15:51:14.132869 kernel: IPI shorthand broadcast: enabled
Feb 13 15:51:14.132883 kernel: sched_clock: Marking stable (760138444, 299050670)->(1199921016, -140731902)
Feb 13 15:51:14.132903 kernel: registered taskstats version 1
Feb 13 15:51:14.132917 kernel: Loading compiled-in X.509 certificates
Feb 13 15:51:14.132931 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:51:14.132944 kernel: Key type .fscrypt registered
Feb 13 15:51:14.132956 kernel: Key type fscrypt-provisioning registered
Feb 13 15:51:14.132971 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:51:14.132986 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:51:14.133000 kernel: ima: No architecture policies found
Feb 13 15:51:14.133013 kernel: clk: Disabling unused clocks
Feb 13 15:51:14.133031 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:51:14.133043 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:51:14.133057 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:51:14.133072 kernel: Run /init as init process
Feb 13 15:51:14.133087 kernel:   with arguments:
Feb 13 15:51:14.133102 kernel:     /init
Feb 13 15:51:14.133117 kernel:   with environment:
Feb 13 15:51:14.133132 kernel:     HOME=/
Feb 13 15:51:14.133151 kernel:     TERM=linux
Feb 13 15:51:14.133175 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:51:14.133234 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:51:14.133254 systemd[1]: Detected virtualization amazon.
Feb 13 15:51:14.133270 systemd[1]: Detected architecture x86-64.
Feb 13 15:51:14.133285 systemd[1]: Running in initrd.
Feb 13 15:51:14.133302 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:51:14.133633 systemd[1]: Hostname set to <localhost>.
Feb 13 15:51:14.133974 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:51:14.133995 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:51:14.134012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:14.134029 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:14.134048 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:51:14.134065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:51:14.134083 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:51:14.134101 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:51:14.134127 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:51:14.134144 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:51:14.134162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:14.134179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:51:14.134197 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:51:14.134215 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:51:14.134233 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:51:14.134254 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:51:14.134271 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:51:14.134289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:51:14.134307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:51:14.134325 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:51:14.134342 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:14.134360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:14.134377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:14.134399 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:51:14.134417 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:51:14.134434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:51:14.134451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:51:14.134470 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:51:14.134494 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:51:14.134593 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:51:14.134612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:51:14.134629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:14.134735 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:51:14.134779 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:51:14.134795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:14.134813 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:51:14.134903 systemd-journald[179]: Journal started
Feb 13 15:51:14.134942 systemd-journald[179]: Runtime Journal (/run/log/journal/ec220ea845babbffa5e4a708c3954f30) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:51:14.150981 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:51:14.151066 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:51:14.169924 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:51:14.356712 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:51:14.356771 kernel: Bridge firewalling registered
Feb 13 15:51:14.252818 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:51:14.360870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:14.363088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:14.367305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:14.385438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:14.408199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:51:14.410883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:51:14.414413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:51:14.472243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:51:14.473717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:14.539171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:14.556160 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:51:14.558477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:51:14.591948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:51:14.624873 dracut-cmdline[211]: dracut-dracut-053
Feb 13 15:51:14.631755 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:14.693187 systemd-resolved[214]: Positive Trust Anchors:
Feb 13 15:51:14.693204 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:51:14.693336 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:51:14.703239 systemd-resolved[214]: Defaulting to hostname 'linux'.
Feb 13 15:51:14.706817 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:51:14.709252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:51:14.801696 kernel: SCSI subsystem initialized
Feb 13 15:51:14.811686 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:51:14.827682 kernel: iscsi: registered transport (tcp)
Feb 13 15:51:14.854679 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:51:14.854759 kernel: QLogic iSCSI HBA Driver
Feb 13 15:51:14.903621 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:51:14.911894 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:51:14.951174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:51:14.951264 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:51:14.951292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:51:14.995691 kernel: raid6: avx512x4 gen() 13354 MB/s
Feb 13 15:51:15.014703 kernel: raid6: avx512x2 gen()  7870 MB/s
Feb 13 15:51:15.032694 kernel: raid6: avx512x1 gen()  6752 MB/s
Feb 13 15:51:15.051699 kernel: raid6: avx2x4   gen()  2299 MB/s
Feb 13 15:51:15.070704 kernel: raid6: avx2x2   gen()  2389 MB/s
Feb 13 15:51:15.088184 kernel: raid6: avx2x1   gen() 11153 MB/s
Feb 13 15:51:15.088268 kernel: raid6: using algorithm avx512x4 gen() 13354 MB/s
Feb 13 15:51:15.113108 kernel: raid6: .... xor() 5463 MB/s, rmw enabled
Feb 13 15:51:15.113599 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:51:15.166371 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:51:15.467685 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:51:15.480815 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:51:15.490996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:51:15.514609 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Feb 13 15:51:15.521550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:51:15.533199 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:51:15.567715 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Feb 13 15:51:15.638846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:51:15.648228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:51:15.824378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:15.839033 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:51:15.900289 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:51:15.905928 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:51:15.910150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:15.911873 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:51:15.935914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:51:15.949369 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:51:15.975287 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:51:15.975498 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:51:15.977374 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:51:15.977412 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:3d:b2:ff:fb:55
Feb 13 15:51:15.970450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:51:16.007689 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:51:16.007760 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:51:16.014217 (udev-worker)[455]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:51:16.076425 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:51:16.076704 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:51:16.078893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:51:16.079042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:16.086576 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:16.092421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:51:16.094381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:16.100368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:16.107725 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:51:16.113091 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:51:16.113194 kernel: GPT:9289727 != 16777215
Feb 13 15:51:16.113219 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:51:16.118986 kernel: GPT:9289727 != 16777215
Feb 13 15:51:16.119326 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:51:16.119357 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:16.122081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:16.250696 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Feb 13 15:51:16.357684 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Feb 13 15:51:16.490409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:51:16.502946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:16.525564 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:51:16.556467 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:51:16.565318 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:51:16.565535 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:51:16.594418 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:51:16.614338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:16.625165 disk-uuid[622]: Primary Header is updated.
Feb 13 15:51:16.625165 disk-uuid[622]: Secondary Entries is updated.
Feb 13 15:51:16.625165 disk-uuid[622]: Secondary Header is updated.
Feb 13 15:51:16.635710 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:16.642683 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:16.649381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:17.647742 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:17.649678 disk-uuid[623]: The operation has completed successfully.
Feb 13 15:51:17.813849 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:51:17.814039 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:51:17.866898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:51:17.893055 sh[889]: Success
Feb 13 15:51:17.909682 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:51:18.056582 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:51:18.074929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:51:18.080912 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:51:18.131737 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:51:18.131861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:18.135293 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:51:18.135382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:51:18.136059 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:51:18.229720 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:51:18.266361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:51:18.270754 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:51:18.286239 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:51:18.302935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:51:18.337947 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:18.338030 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:18.338055 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:18.344784 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:18.367727 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:18.367282 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:51:18.379641 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:51:18.390371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:51:18.463885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:51:18.479232 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:51:18.518223 systemd-networkd[1081]: lo: Link UP
Feb 13 15:51:18.518237 systemd-networkd[1081]: lo: Gained carrier
Feb 13 15:51:18.523351 systemd-networkd[1081]: Enumeration completed
Feb 13 15:51:18.525052 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:18.525057 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:51:18.525343 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:51:18.535877 systemd[1]: Reached target network.target - Network.
Feb 13 15:51:18.543866 systemd-networkd[1081]: eth0: Link UP
Feb 13 15:51:18.543876 systemd-networkd[1081]: eth0: Gained carrier
Feb 13 15:51:18.543895 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:18.567805 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.28.66/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:51:18.693239 ignition[1010]: Ignition 2.20.0
Feb 13 15:51:18.693313 ignition[1010]: Stage: fetch-offline
Feb 13 15:51:18.693556 ignition[1010]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:18.693569 ignition[1010]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:18.695506 ignition[1010]: Ignition finished successfully
Feb 13 15:51:18.699245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:51:18.708929 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:51:18.728272 ignition[1092]: Ignition 2.20.0
Feb 13 15:51:18.728288 ignition[1092]: Stage: fetch
Feb 13 15:51:18.729383 ignition[1092]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:18.729397 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:18.730564 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:18.810246 ignition[1092]: PUT result: OK
Feb 13 15:51:18.833879 ignition[1092]: parsed url from cmdline: ""
Feb 13 15:51:18.833894 ignition[1092]: no config URL provided
Feb 13 15:51:18.833906 ignition[1092]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:51:18.833932 ignition[1092]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:51:18.833962 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:18.847138 ignition[1092]: PUT result: OK
Feb 13 15:51:18.847224 ignition[1092]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:51:18.856290 ignition[1092]: GET result: OK
Feb 13 15:51:18.859842 ignition[1092]: parsing config with SHA512: f23899a73fe0dc31ce8cd5a9a1ad5fe0e281c00f0d7a628276b8dfb53e761ee9e3167255fc28c8ab6d1c5d06361e86413d65747ee4dd1f060574aef13db5ece0
Feb 13 15:51:18.874536 unknown[1092]: fetched base config from "system"
Feb 13 15:51:18.874551 unknown[1092]: fetched base config from "system"
Feb 13 15:51:18.874999 ignition[1092]: fetch: fetch complete
Feb 13 15:51:18.874561 unknown[1092]: fetched user config from "aws"
Feb 13 15:51:18.875008 ignition[1092]: fetch: fetch passed
Feb 13 15:51:18.875078 ignition[1092]: Ignition finished successfully
Feb 13 15:51:18.881266 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:51:18.889975 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:51:18.920851 ignition[1098]: Ignition 2.20.0
Feb 13 15:51:18.920868 ignition[1098]: Stage: kargs
Feb 13 15:51:18.921304 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:18.921320 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:18.921460 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:18.927361 ignition[1098]: PUT result: OK
Feb 13 15:51:18.931628 ignition[1098]: kargs: kargs passed
Feb 13 15:51:18.931722 ignition[1098]: Ignition finished successfully
Feb 13 15:51:18.935680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:51:18.943915 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:51:18.987461 ignition[1104]: Ignition 2.20.0
Feb 13 15:51:18.987609 ignition[1104]: Stage: disks
Feb 13 15:51:18.988246 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:18.988263 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:18.988645 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:18.991313 ignition[1104]: PUT result: OK
Feb 13 15:51:19.010935 ignition[1104]: disks: disks passed
Feb 13 15:51:19.011101 ignition[1104]: Ignition finished successfully
Feb 13 15:51:19.015843 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:51:19.016900 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:51:19.021264 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:51:19.024753 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:51:19.032585 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:51:19.047923 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:51:19.072108 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:51:19.135947 systemd-fsck[1113]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:51:19.146201 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:51:19.154372 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:51:19.282680 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:51:19.284004 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:51:19.285136 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:51:19.308280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:51:19.320329 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:51:19.321068 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:51:19.321148 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:51:19.321191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:51:19.352543 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:51:19.365056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:51:19.371682 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1133)
Feb 13 15:51:19.373677 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:19.373751 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:19.374864 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:19.379695 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:19.380803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:51:19.569396 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:51:19.600302 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:51:19.609375 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:51:19.618537 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:51:19.815015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:51:19.831088 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:51:19.857186 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:51:19.879881 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:19.881627 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:51:19.935233 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:51:19.945480 ignition[1245]: INFO     : Ignition 2.20.0
Feb 13 15:51:19.945480 ignition[1245]: INFO     : Stage: mount
Feb 13 15:51:19.949096 ignition[1245]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:19.949096 ignition[1245]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:19.949096 ignition[1245]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:19.968758 ignition[1245]: INFO     : PUT result: OK
Feb 13 15:51:19.974599 ignition[1245]: INFO     : mount: mount passed
Feb 13 15:51:19.975966 ignition[1245]: INFO     : Ignition finished successfully
Feb 13 15:51:19.977963 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:51:19.988989 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:51:20.057953 systemd-networkd[1081]: eth0: Gained IPv6LL
Feb 13 15:51:20.291926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:51:20.333216 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257)
Feb 13 15:51:20.342208 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:20.342305 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:20.342349 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:20.359697 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:20.365340 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:51:20.421327 ignition[1273]: INFO     : Ignition 2.20.0
Feb 13 15:51:20.421327 ignition[1273]: INFO     : Stage: files
Feb 13 15:51:20.424644 ignition[1273]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:20.424644 ignition[1273]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:20.424644 ignition[1273]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:20.433699 ignition[1273]: INFO     : PUT result: OK
Feb 13 15:51:20.454958 ignition[1273]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:51:20.462330 ignition[1273]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:51:20.462330 ignition[1273]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:51:20.482294 ignition[1273]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:51:20.489097 ignition[1273]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:51:20.491704 ignition[1273]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:51:20.489961 unknown[1273]: wrote ssh authorized keys file for user: core
Feb 13 15:51:20.513230 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:20.516636 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:51:21.003391 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:51:21.596159 ignition[1273]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:21.603869 ignition[1273]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:51:21.603869 ignition[1273]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:51:21.603869 ignition[1273]: INFO     : files: files passed
Feb 13 15:51:21.603869 ignition[1273]: INFO     : Ignition finished successfully
Feb 13 15:51:21.613502 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:51:21.626547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:51:21.648633 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:51:21.657366 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:51:21.657722 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:51:21.708924 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:21.708924 initrd-setup-root-after-ignition[1303]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:21.723551 initrd-setup-root-after-ignition[1307]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:21.733508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:51:21.734776 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:51:21.755931 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:51:21.793146 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:51:21.793284 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:51:21.800992 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:51:21.803018 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:51:21.805066 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:51:21.814625 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:51:21.852132 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:51:21.866189 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:51:21.902395 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:51:21.903013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:21.912120 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:51:21.916972 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:51:21.919509 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:51:21.931396 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:51:21.931946 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:51:21.934736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:51:21.935298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:51:21.936163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:51:21.936617 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:51:21.937034 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:51:21.937259 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:51:21.937449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:51:21.937876 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:51:21.938214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:51:21.938701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:51:21.939718 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:51:21.939946 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:21.940276 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:51:21.952362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:21.955987 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:51:21.956151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:51:21.964480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:51:21.964714 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:51:22.007440 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:51:22.010861 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:51:22.035466 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:51:22.051078 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:51:22.066873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:51:22.067079 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:22.074146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:51:22.074349 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:51:22.096484 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:51:22.096625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:51:22.126310 ignition[1327]: INFO     : Ignition 2.20.0
Feb 13 15:51:22.126310 ignition[1327]: INFO     : Stage: umount
Feb 13 15:51:22.126310 ignition[1327]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:22.126310 ignition[1327]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:22.126310 ignition[1327]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:22.144009 ignition[1327]: INFO     : PUT result: OK
Feb 13 15:51:22.154440 ignition[1327]: INFO     : umount: umount passed
Feb 13 15:51:22.154440 ignition[1327]: INFO     : Ignition finished successfully
Feb 13 15:51:22.166413 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:51:22.168825 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:51:22.168969 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:51:22.185779 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:51:22.186172 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:51:22.202854 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:51:22.203125 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:51:22.209140 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:51:22.209243 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:51:22.215648 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:51:22.215748 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:51:22.219426 systemd[1]: Stopped target network.target - Network.
Feb 13 15:51:22.221423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:51:22.221617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:51:22.225093 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:51:22.227271 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:51:22.228897 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:22.232560 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:51:22.238368 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:51:22.242841 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:51:22.242895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:51:22.246213 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:51:22.246335 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:51:22.253707 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:51:22.253811 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:51:22.259050 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:51:22.259125 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:51:22.262051 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:51:22.262165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:51:22.267835 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:51:22.271165 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:51:22.299758 systemd-networkd[1081]: eth0: DHCPv6 lease lost
Feb 13 15:51:22.304299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:51:22.304646 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:51:22.316444 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:51:22.316731 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:51:22.320596 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:51:22.320963 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:22.341524 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:51:22.343048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:51:22.343133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:51:22.358562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:51:22.358683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:22.371846 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:51:22.371948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:22.376964 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:51:22.377038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:51:22.386597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:51:22.426095 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:51:22.428691 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:51:22.438090 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:51:22.438167 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:22.448234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:51:22.448284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:22.451355 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:51:22.451445 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:51:22.458597 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:51:22.458728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:51:22.474943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:51:22.475013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:22.491860 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:51:22.493800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:51:22.493875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:51:22.496400 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:51:22.496463 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:22.499306 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:51:22.499371 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:22.504364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:51:22.504467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:22.509734 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:51:22.509830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:51:22.514983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:51:22.515178 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:51:22.532688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:51:22.565646 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:51:22.609550 systemd[1]: Switching root.
Feb 13 15:51:22.643148 systemd-journald[179]: Journal stopped
Feb 13 15:51:25.304611 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:51:25.304752 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:51:25.304792 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:51:25.304815 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:51:25.304839 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:51:25.304874 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:51:25.304898 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:51:25.304922 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:51:25.304943 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:51:25.304965 kernel: audit: type=1403 audit(1739461883.536:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:51:25.304989 systemd[1]: Successfully loaded SELinux policy in 62.400ms.
Feb 13 15:51:25.305028 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.461ms.
Feb 13 15:51:25.305054 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:51:25.305078 systemd[1]: Detected virtualization amazon.
Feb 13 15:51:25.305106 systemd[1]: Detected architecture x86-64.
Feb 13 15:51:25.305136 systemd[1]: Detected first boot.
Feb 13 15:51:25.305168 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:51:25.305195 zram_generator::config[1370]: No configuration found.
Feb 13 15:51:25.305221 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:51:25.305248 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:51:25.305272 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:51:25.305297 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:51:25.305323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:51:25.305348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:51:25.305374 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:51:25.305397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:51:25.305421 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:51:25.305450 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:51:25.305475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:51:25.305499 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:51:25.305523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:25.305546 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:25.305570 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:51:25.305595 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:51:25.305618 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:51:25.313731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:51:25.313796 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:51:25.313820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:25.313844 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:51:25.313881 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:51:25.313906 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:51:25.313928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:51:25.313951 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:25.313976 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:51:25.314003 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:51:25.314028 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:51:25.314053 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:51:25.314078 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:51:25.314103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:25.314127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:25.314150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:25.314175 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:51:25.314199 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:51:25.314228 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:51:25.314252 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:51:25.314276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:25.314301 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:51:25.314325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:51:25.314351 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:51:25.314377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:51:25.314403 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:51:25.314425 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:51:25.314455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:51:25.314480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:51:25.314505 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:51:25.314529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:51:25.314822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:51:25.314854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:51:25.314878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:51:25.314903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:51:25.314936 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:51:25.314961 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:51:25.314986 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:51:25.315010 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:51:25.315035 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:51:25.315066 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:51:25.315092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:51:25.315116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:51:25.315138 kernel: loop: module loaded
Feb 13 15:51:25.315169 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:51:25.315192 kernel: fuse: init (API version 7.39)
Feb 13 15:51:25.315217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:51:25.315241 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:51:25.315267 systemd[1]: Stopped verity-setup.service.
Feb 13 15:51:25.315303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:25.315326 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:51:25.315350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:51:25.315373 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:51:25.315402 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:51:25.315426 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:51:25.315449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:51:25.315475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:25.315499 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:51:25.315535 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:51:25.315561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:51:25.315586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:51:25.315610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:51:25.315633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:51:25.315673 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:51:25.315700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:51:25.315727 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:51:25.315752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:51:25.315783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:25.315857 systemd-journald[1452]: Collecting audit messages is disabled.
Feb 13 15:51:25.315904 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:51:25.315934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:51:25.315958 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:51:25.316138 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:51:25.316167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:51:25.316193 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:51:25.316217 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:51:25.316242 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:51:25.316268 systemd-journald[1452]: Journal started
Feb 13 15:51:25.316321 systemd-journald[1452]: Runtime Journal (/run/log/journal/ec220ea845babbffa5e4a708c3954f30) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:51:24.569806 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:51:24.606418 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:51:24.607016 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:51:25.325675 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:51:25.359697 kernel: ACPI: bus type drm_connector registered
Feb 13 15:51:25.367947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:51:25.368052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:51:25.382886 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:51:25.382977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:51:25.392710 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:51:25.395696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:51:25.402451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:51:25.409868 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:51:25.428009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:51:25.446821 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:51:25.457768 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:51:25.464175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:51:25.464388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:51:25.466682 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:25.471886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:51:25.474019 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:51:25.477030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:51:25.481392 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:51:25.518704 kernel: loop0: detected capacity change from 0 to 211296
Feb 13 15:51:25.534226 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:51:25.551795 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:51:25.564898 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:51:25.577132 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:51:25.581235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:25.601907 systemd-tmpfiles[1479]: ACLs are not supported, ignoring.
Feb 13 15:51:25.601935 systemd-tmpfiles[1479]: ACLs are not supported, ignoring.
Feb 13 15:51:25.622986 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:51:25.629690 systemd-journald[1452]: Time spent on flushing to /var/log/journal/ec220ea845babbffa5e4a708c3954f30 is 36.503ms for 955 entries.
Feb 13 15:51:25.629690 systemd-journald[1452]: System Journal (/var/log/journal/ec220ea845babbffa5e4a708c3954f30) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:51:25.684991 systemd-journald[1452]: Received client request to flush runtime journal.
Feb 13 15:51:25.685061 kernel: loop1: detected capacity change from 0 to 141000
Feb 13 15:51:25.637796 udevadm[1507]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:51:25.646563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:25.665986 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:51:25.688003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:51:25.753411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:51:25.756005 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:51:25.775198 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:51:25.795356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:51:25.812700 kernel: loop2: detected capacity change from 0 to 62848
Feb 13 15:51:25.856883 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Feb 13 15:51:25.857430 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Feb 13 15:51:25.867575 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:51:25.950098 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 15:51:26.085694 kernel: loop4: detected capacity change from 0 to 211296
Feb 13 15:51:26.106686 kernel: loop5: detected capacity change from 0 to 141000
Feb 13 15:51:26.135682 kernel: loop6: detected capacity change from 0 to 62848
Feb 13 15:51:26.163678 kernel: loop7: detected capacity change from 0 to 138184
Feb 13 15:51:26.212287 (sd-merge)[1525]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:51:26.215721 (sd-merge)[1525]: Merged extensions into '/usr'.
Feb 13 15:51:26.225893 systemd[1]: Reloading requested from client PID 1478 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:51:26.226077 systemd[1]: Reloading...
Feb 13 15:51:26.459680 zram_generator::config[1551]: No configuration found.
Feb 13 15:51:26.705055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:51:26.829442 systemd[1]: Reloading finished in 602 ms.
Feb 13 15:51:26.868091 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:51:26.882010 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:51:26.891675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:51:26.913718 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:51:26.913737 systemd[1]: Reloading...
Feb 13 15:51:26.978585 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:51:26.980797 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:51:26.983225 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:51:26.987129 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Feb 13 15:51:26.987438 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Feb 13 15:51:26.998679 zram_generator::config[1624]: No configuration found.
Feb 13 15:51:27.010067 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:51:27.010085 systemd-tmpfiles[1600]: Skipping /boot
Feb 13 15:51:27.027492 ldconfig[1474]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:51:27.039405 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:51:27.039428 systemd-tmpfiles[1600]: Skipping /boot
Feb 13 15:51:27.235592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:51:27.348724 systemd[1]: Reloading finished in 434 ms.
Feb 13 15:51:27.366916 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:51:27.369182 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:51:27.376377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:51:27.411932 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:51:27.434922 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:51:27.446333 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:51:27.468928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:51:27.485702 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:51:27.495926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:51:27.518514 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:51:27.524921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.525720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:51:27.541054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:51:27.547045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:51:27.562538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:51:27.564753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:51:27.564960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.567965 systemd-udevd[1691]: Using default interface naming scheme 'v255'.
Feb 13 15:51:27.573077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.573570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:51:27.574244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:51:27.574992 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.584048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.585417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:51:27.599835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:51:27.602090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:51:27.602421 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:51:27.604166 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:27.622442 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:51:27.636291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:51:27.673523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:51:27.683971 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:51:27.696097 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:51:27.696306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:51:27.718458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:51:27.720721 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:51:27.729772 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:51:27.733270 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:51:27.741792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:51:27.742052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:51:27.747393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:51:27.747745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:51:27.754706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:51:27.754833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:51:27.763642 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:51:27.768458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:51:27.783698 augenrules[1723]: No rules
Feb 13 15:51:27.792218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:51:27.794540 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:51:27.797124 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:51:27.802163 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:51:27.960809 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:51:28.010260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:51:28.038497 systemd-networkd[1728]: lo: Link UP
Feb 13 15:51:28.038508 systemd-networkd[1728]: lo: Gained carrier
Feb 13 15:51:28.043055 systemd-resolved[1689]: Positive Trust Anchors:
Feb 13 15:51:28.044413 systemd-networkd[1728]: Enumeration completed
Feb 13 15:51:28.044891 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:51:28.048934 systemd-networkd[1728]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:28.049089 systemd-networkd[1728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:51:28.049969 systemd-resolved[1689]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:51:28.050053 systemd-resolved[1689]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:51:28.054932 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:51:28.057744 systemd-networkd[1728]: eth0: Link UP
Feb 13 15:51:28.058144 systemd-networkd[1728]: eth0: Gained carrier
Feb 13 15:51:28.058827 systemd-networkd[1728]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:28.064916 systemd-resolved[1689]: Defaulting to hostname 'linux'.
Feb 13 15:51:28.069988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:51:28.072588 systemd[1]: Reached target network.target - Network.
Feb 13 15:51:28.074045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:51:28.075909 systemd-networkd[1728]: eth0: DHCPv4 address 172.31.28.66/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:51:28.099454 systemd-networkd[1728]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:28.119717 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:51:28.127683 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 15:51:28.146740 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:51:28.148697 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 15:51:28.151687 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:51:28.159753 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 15:51:28.161701 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1739)
Feb 13 15:51:28.276678 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:51:28.304077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:28.362702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:51:28.364817 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:51:28.372434 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:51:28.376845 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:51:28.409767 lvm[1847]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:51:28.420195 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:51:28.449556 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:51:28.634708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:51:28.647955 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:51:28.652969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:28.657418 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:51:28.657894 lvm[1854]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:51:28.660320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:51:28.662673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:51:28.665128 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:51:28.667723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:51:28.669982 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:51:28.672170 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:51:28.672300 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:51:28.674136 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:51:28.678223 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:51:28.682453 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:51:28.695769 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:51:28.698624 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:51:28.701990 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:51:28.714416 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:51:28.722090 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:51:28.724672 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:51:28.724718 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:51:28.734858 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:51:28.739727 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:51:28.762291 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:51:28.804835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:51:28.810100 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:51:28.812559 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:51:28.819918 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:51:28.831179 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:51:28.839838 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:51:28.846039 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:51:28.851069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:51:28.857134 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:51:28.859746 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:51:28.860432 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:51:28.866073 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:51:28.875823 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:51:28.975148 jq[1873]: true
Feb 13 15:51:28.981030 jq[1862]: false
Feb 13 15:51:28.994284 dbus-daemon[1861]: [system] SELinux support is enabled
Feb 13 15:51:28.997842 dbus-daemon[1861]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1728 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:51:29.006735 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:51:29.017820 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found loop4
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found loop5
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found loop6
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found loop7
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p1
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p2
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p3
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found usr
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p4
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p6
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p7
Feb 13 15:51:29.025895 extend-filesystems[1863]: Found nvme0n1p9
Feb 13 15:51:29.025895 extend-filesystems[1863]: Checking size of /dev/nvme0n1p9
Feb 13 15:51:29.018211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:51:29.066679 update_engine[1871]: I20250213 15:51:29.039822  1871 main.cc:92] Flatcar Update Engine starting
Feb 13 15:51:29.066679 update_engine[1871]: I20250213 15:51:29.041837  1871 update_check_scheduler.cc:74] Next update check in 4m12s
Feb 13 15:51:29.037816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:51:29.039823 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:51:29.057110 (ntainerd)[1886]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:51:29.075632 ntpd[1865]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:33:53 UTC 2025 (1): Starting
Feb 13 15:51:29.086822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:51:29.075691 ntpd[1865]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:51:29.086868 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:51:29.075702 ntpd[1865]: ----------------------------------------------------
Feb 13 15:51:29.075712 ntpd[1865]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:51:29.075723 ntpd[1865]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:51:29.075732 ntpd[1865]: corporation.  Support and training for ntp-4 are
Feb 13 15:51:29.075742 ntpd[1865]: available at https://www.nwtime.org/support
Feb 13 15:51:29.075752 ntpd[1865]: ----------------------------------------------------
Feb 13 15:51:29.091014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:51:29.081043 ntpd[1865]: proto: precision = 0.097 usec (-23)
Feb 13 15:51:29.091045 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:51:29.086928 ntpd[1865]: basedate set to 2025-02-01
Feb 13 15:51:29.099982 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:51:29.087107 ntpd[1865]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:51:29.098790 ntpd[1865]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:51:29.098901 ntpd[1865]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:51:29.099957 ntpd[1865]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:51:29.100008 ntpd[1865]: Listen normally on 3 eth0 172.31.28.66:123
Feb 13 15:51:29.100099 ntpd[1865]: Listen normally on 4 lo [::1]:123
Feb 13 15:51:29.100151 ntpd[1865]: bind(21) AF_INET6 fe80::43d:b2ff:feff:fb55%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:51:29.100175 ntpd[1865]: unable to create socket on eth0 (5) for fe80::43d:b2ff:feff:fb55%2#123
Feb 13 15:51:29.100264 ntpd[1865]: failed to init interface for address fe80::43d:b2ff:feff:fb55%2
Feb 13 15:51:29.100314 ntpd[1865]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:51:29.109956 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:51:29.112960 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:51:29.113447 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:51:29.114643 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:51:29.123112 ntpd[1865]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:51:29.123157 ntpd[1865]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:51:29.128467 jq[1884]: true
Feb 13 15:51:29.156716 extend-filesystems[1863]: Resized partition /dev/nvme0n1p9
Feb 13 15:51:29.168789 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:51:29.184023 extend-filesystems[1906]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:51:29.215445 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:51:29.267589 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:51:29.342011 systemd-networkd[1728]: eth0: Gained IPv6LL
Feb 13 15:51:29.384718 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:51:29.397797 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:51:29.413178 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:51:29.464327 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:51:29.478693 extend-filesystems[1906]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:51:29.478693 extend-filesystems[1906]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:51:29.478693 extend-filesystems[1906]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:51:29.519599 extend-filesystems[1863]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:51:29.523545 coreos-metadata[1860]: Feb 13 15:51:29.486 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:51:29.523545 coreos-metadata[1860]: Feb 13 15:51:29.522 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:51:29.479884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:51:29.486373 systemd-logind[1870]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:51:29.486399 systemd-logind[1870]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:51:29.486423 systemd-logind[1870]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:51:29.562026 bash[1932]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:51:29.488175 systemd-logind[1870]: New seat seat0.
Feb 13 15:51:29.522251 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:51:29.525887 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:51:29.528027 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:51:29.528361 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:51:29.593739 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1722)
Feb 13 15:51:29.610805 coreos-metadata[1860]: Feb 13 15:51:29.599 INFO Fetch successful
Feb 13 15:51:29.610805 coreos-metadata[1860]: Feb 13 15:51:29.604 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:51:29.600230 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:51:29.674344 coreos-metadata[1860]: Feb 13 15:51:29.665 INFO Fetch successful
Feb 13 15:51:29.674344 coreos-metadata[1860]: Feb 13 15:51:29.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:51:29.643828 systemd[1]: Starting sshkeys.service...
Feb 13 15:51:29.683007 coreos-metadata[1860]: Feb 13 15:51:29.681 INFO Fetch successful
Feb 13 15:51:29.683007 coreos-metadata[1860]: Feb 13 15:51:29.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:51:29.697512 coreos-metadata[1860]: Feb 13 15:51:29.697 INFO Fetch successful
Feb 13 15:51:29.697512 coreos-metadata[1860]: Feb 13 15:51:29.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:51:29.702309 coreos-metadata[1860]: Feb 13 15:51:29.701 INFO Fetch failed with 404: resource not found
Feb 13 15:51:29.702309 coreos-metadata[1860]: Feb 13 15:51:29.701 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:51:29.720837 coreos-metadata[1860]: Feb 13 15:51:29.714 INFO Fetch successful
Feb 13 15:51:29.720837 coreos-metadata[1860]: Feb 13 15:51:29.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:51:29.720837 coreos-metadata[1860]: Feb 13 15:51:29.719 INFO Fetch successful
Feb 13 15:51:29.720837 coreos-metadata[1860]: Feb 13 15:51:29.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:51:29.727037 coreos-metadata[1860]: Feb 13 15:51:29.724 INFO Fetch successful
Feb 13 15:51:29.727037 coreos-metadata[1860]: Feb 13 15:51:29.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:51:29.740242 coreos-metadata[1860]: Feb 13 15:51:29.731 INFO Fetch successful
Feb 13 15:51:29.740242 coreos-metadata[1860]: Feb 13 15:51:29.731 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:51:29.740242 coreos-metadata[1860]: Feb 13 15:51:29.736 INFO Fetch successful
Feb 13 15:51:29.771984 locksmithd[1896]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:51:29.804361 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:51:29.814956 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:51:29.845817 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:51:29.860156 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:51:29.872334 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:51:29.875239 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:51:29.880544 dbus-daemon[1861]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1905 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:51:29.896251 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: Initializing new seelog logger
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: New Seelog Logger Creation Complete
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 processing appconfig overrides
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 processing appconfig overrides
Feb 13 15:51:29.925632 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO Proxy environment variables:
Feb 13 15:51:29.938908 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.938908 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:29.938908 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 processing appconfig overrides
Feb 13 15:51:30.006786 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:30.006786 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:51:30.006786 amazon-ssm-agent[1934]: 2025/02/13 15:51:29 processing appconfig overrides
Feb 13 15:51:30.019180 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:51:30.022855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:51:30.043881 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO https_proxy:
Feb 13 15:51:30.081854 sshd_keygen[1907]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:51:30.100438 polkitd[1982]: Started polkitd version 121
Feb 13 15:51:30.126514 coreos-metadata[1975]: Feb 13 15:51:30.126 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:51:30.128923 coreos-metadata[1975]: Feb 13 15:51:30.128 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:51:30.131999 coreos-metadata[1975]: Feb 13 15:51:30.131 INFO Fetch successful
Feb 13 15:51:30.131999 coreos-metadata[1975]: Feb 13 15:51:30.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:51:30.132761 coreos-metadata[1975]: Feb 13 15:51:30.132 INFO Fetch successful
Feb 13 15:51:30.137456 unknown[1975]: wrote ssh authorized keys file for user: core
Feb 13 15:51:30.146705 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO http_proxy:
Feb 13 15:51:30.160157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:51:30.180033 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:51:30.184740 polkitd[1982]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:51:30.184834 polkitd[1982]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:51:30.190453 systemd[1]: Started sshd@0-172.31.28.66:22-139.178.89.65:36066.service - OpenSSH per-connection server daemon (139.178.89.65:36066).
Feb 13 15:51:30.196946 polkitd[1982]: Finished loading, compiling and executing 2 rules
Feb 13 15:51:30.201882 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:51:30.206637 polkitd[1982]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:51:30.203817 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:51:30.243368 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:51:30.243643 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:51:30.246948 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO no_proxy:
Feb 13 15:51:30.255978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:51:30.260839 update-ssh-keys[2040]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:51:30.263634 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:51:30.275609 systemd[1]: Finished sshkeys.service.
Feb 13 15:51:30.321532 systemd-hostnamed[1905]: Hostname set to <ip-172-31-28-66> (transient)
Feb 13 15:51:30.321707 systemd-resolved[1689]: System hostname changed to 'ip-172-31-28-66'.
Feb 13 15:51:30.361778 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:51:30.386021 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:51:30.409087 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:51:30.437887 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:51:30.451281 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:51:30.472780 amazon-ssm-agent[1934]: 2025-02-13 15:51:29 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:51:30.566399 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO Agent will take identity from EC2
Feb 13 15:51:30.664202 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:51:30.721730 sshd[2055]: Accepted publickey for core from 139.178.89.65 port 36066 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:30.723874 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:30.748913 containerd[1886]: time="2025-02-13T15:51:30.746065547Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:51:30.769687 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:51:30.768556 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:51:30.781471 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:51:30.798958 systemd-logind[1870]: New session 1 of user core.
Feb 13 15:51:30.828633 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:51:30.849373 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:51:30.866797 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:51:30.894834 (systemd)[2099]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:51:30.950736 containerd[1886]: time="2025-02-13T15:51:30.947716734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.952219 containerd[1886]: time="2025-02-13T15:51:30.952073763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:51:30.952219 containerd[1886]: time="2025-02-13T15:51:30.952131051Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:51:30.952219 containerd[1886]: time="2025-02-13T15:51:30.952158219Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.953979069Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954098561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954189595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954211349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954434358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954453477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954473971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954487736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954574381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.954982885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:51:30.955476 containerd[1886]: time="2025-02-13T15:51:30.955146732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:51:30.956015 containerd[1886]: time="2025-02-13T15:51:30.955169064Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:51:30.956015 containerd[1886]: time="2025-02-13T15:51:30.955277635Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:51:30.956015 containerd[1886]: time="2025-02-13T15:51:30.955332182Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:51:30.964108 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:51:30.975930 containerd[1886]: time="2025-02-13T15:51:30.975787858Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:51:30.976874 containerd[1886]: time="2025-02-13T15:51:30.976167430Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:51:30.976874 containerd[1886]: time="2025-02-13T15:51:30.976204220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:51:30.976874 containerd[1886]: time="2025-02-13T15:51:30.976518100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:51:30.976874 containerd[1886]: time="2025-02-13T15:51:30.976558942Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:51:30.977170 containerd[1886]: time="2025-02-13T15:51:30.977112196Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979087147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979283606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979307777Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979331402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979351828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979371543Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979390295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979414034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979435669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979457835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979475181Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979491038Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979521150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.979686 containerd[1886]: time="2025-02-13T15:51:30.979541315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980280 containerd[1886]: time="2025-02-13T15:51:30.979558721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980280 containerd[1886]: time="2025-02-13T15:51:30.979577694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980280 containerd[1886]: time="2025-02-13T15:51:30.979594410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980280 containerd[1886]: time="2025-02-13T15:51:30.979612272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980280 containerd[1886]: time="2025-02-13T15:51:30.979628822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.979649066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980515326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980749587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980772294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980789189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980868536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.980930 containerd[1886]: time="2025-02-13T15:51:30.980893017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:51:30.982914 containerd[1886]: time="2025-02-13T15:51:30.981243464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.982914 containerd[1886]: time="2025-02-13T15:51:30.981281063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.982914 containerd[1886]: time="2025-02-13T15:51:30.982646768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:51:30.982914 containerd[1886]: time="2025-02-13T15:51:30.982781618Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.982810607Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983272664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983317092Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983335247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983358455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983376638Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:51:30.983704 containerd[1886]: time="2025-02-13T15:51:30.983397411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:51:30.984009 containerd[1886]: time="2025-02-13T15:51:30.983803672Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:51:30.984009 containerd[1886]: time="2025-02-13T15:51:30.983896474Z" level=info msg="Connect containerd service"
Feb 13 15:51:30.984009 containerd[1886]: time="2025-02-13T15:51:30.983980134Z" level=info msg="using legacy CRI server"
Feb 13 15:51:30.984009 containerd[1886]: time="2025-02-13T15:51:30.983991696Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:51:30.985228 containerd[1886]: time="2025-02-13T15:51:30.985176040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:51:30.987639 containerd[1886]: time="2025-02-13T15:51:30.987587496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:51:30.988857 containerd[1886]: time="2025-02-13T15:51:30.988810176Z" level=info msg="Start subscribing containerd event"
Feb 13 15:51:30.988994 containerd[1886]: time="2025-02-13T15:51:30.988872696Z" level=info msg="Start recovering state"
Feb 13 15:51:30.991681 containerd[1886]: time="2025-02-13T15:51:30.988961223Z" level=info msg="Start event monitor"
Feb 13 15:51:30.991681 containerd[1886]: time="2025-02-13T15:51:30.990140082Z" level=info msg="Start snapshots syncer"
Feb 13 15:51:30.991681 containerd[1886]: time="2025-02-13T15:51:30.990161014Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:51:30.991681 containerd[1886]: time="2025-02-13T15:51:30.990174979Z" level=info msg="Start streaming server"
Feb 13 15:51:30.991872 containerd[1886]: time="2025-02-13T15:51:30.991684033Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:51:30.991872 containerd[1886]: time="2025-02-13T15:51:30.991752355Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:51:30.993062 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:51:31.002073 containerd[1886]: time="2025-02-13T15:51:31.002016924Z" level=info msg="containerd successfully booted in 0.257560s"
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [Registrar] Starting registrar module
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:30 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:31 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:31 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:51:31.045759 amazon-ssm-agent[1934]: 2025-02-13 15:51:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:51:31.063487 amazon-ssm-agent[1934]: 2025-02-13 15:51:31 INFO [CredentialRefresher] Next credential rotation will be in 32.391660588583335 minutes
Feb 13 15:51:31.125411 systemd[2099]: Queued start job for default target default.target.
Feb 13 15:51:31.142290 systemd[2099]: Created slice app.slice - User Application Slice.
Feb 13 15:51:31.145055 systemd[2099]: Reached target paths.target - Paths.
Feb 13 15:51:31.145082 systemd[2099]: Reached target timers.target - Timers.
Feb 13 15:51:31.165101 systemd[2099]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:51:31.178467 systemd[2099]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:51:31.178630 systemd[2099]: Reached target sockets.target - Sockets.
Feb 13 15:51:31.178672 systemd[2099]: Reached target basic.target - Basic System.
Feb 13 15:51:31.178737 systemd[2099]: Reached target default.target - Main User Target.
Feb 13 15:51:31.178776 systemd[2099]: Startup finished in 236ms.
Feb 13 15:51:31.179093 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:51:31.187939 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:51:31.362142 systemd[1]: Started sshd@1-172.31.28.66:22-139.178.89.65:36068.service - OpenSSH per-connection server daemon (139.178.89.65:36068).
Feb 13 15:51:31.582027 sshd[2112]: Accepted publickey for core from 139.178.89.65 port 36068 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:31.589586 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:31.612955 systemd-logind[1870]: New session 2 of user core.
Feb 13 15:51:31.619124 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:51:31.746229 sshd[2114]: Connection closed by 139.178.89.65 port 36068
Feb 13 15:51:31.748583 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:31.754340 systemd[1]: sshd@1-172.31.28.66:22-139.178.89.65:36068.service: Deactivated successfully.
Feb 13 15:51:31.757470 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:51:31.763509 systemd-logind[1870]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:51:31.765114 systemd-logind[1870]: Removed session 2.
Feb 13 15:51:31.789494 systemd[1]: Started sshd@2-172.31.28.66:22-139.178.89.65:36082.service - OpenSSH per-connection server daemon (139.178.89.65:36082).
Feb 13 15:51:31.974641 sshd[2119]: Accepted publickey for core from 139.178.89.65 port 36082 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:31.976289 sshd-session[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:31.990945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:51:32.005606 systemd-logind[1870]: New session 3 of user core.
Feb 13 15:51:32.014968 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:51:32.015432 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:51:32.018594 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:51:32.021197 systemd[1]: Startup finished in 935ms (kernel) + 9.774s (initrd) + 8.545s (userspace) = 19.255s.
Feb 13 15:51:32.077796 ntpd[1865]: Listen normally on 6 eth0 [fe80::43d:b2ff:feff:fb55%2]:123
Feb 13 15:51:32.079574 ntpd[1865]: 13 Feb 15:51:32 ntpd[1865]: Listen normally on 6 eth0 [fe80::43d:b2ff:feff:fb55%2]:123
Feb 13 15:51:32.125046 amazon-ssm-agent[1934]: 2025-02-13 15:51:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:51:32.212583 agetty[2079]: failed to open credentials directory
Feb 13 15:51:32.236755 amazon-ssm-agent[1934]: 2025-02-13 15:51:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2130) started
Feb 13 15:51:32.248416 agetty[2078]: failed to open credentials directory
Feb 13 15:51:32.321572 sshd[2127]: Connection closed by 139.178.89.65 port 36082
Feb 13 15:51:32.326356 sshd-session[2119]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:32.338530 amazon-ssm-agent[1934]: 2025-02-13 15:51:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:51:32.338927 systemd[1]: sshd@2-172.31.28.66:22-139.178.89.65:36082.service: Deactivated successfully.
Feb 13 15:51:32.343504 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:51:32.346919 systemd-logind[1870]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:51:32.350190 systemd-logind[1870]: Removed session 3.
Feb 13 15:51:33.105026 kubelet[2126]: E0213 15:51:33.104938    2126 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:51:33.108015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:51:33.108217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:51:33.108919 systemd[1]: kubelet.service: Consumed 1.048s CPU time.
Feb 13 15:51:36.656568 systemd-resolved[1689]: Clock change detected. Flushing caches.
Feb 13 15:51:42.940349 systemd[1]: Started sshd@3-172.31.28.66:22-139.178.89.65:54920.service - OpenSSH per-connection server daemon (139.178.89.65:54920).
Feb 13 15:51:43.180780 sshd[2154]: Accepted publickey for core from 139.178.89.65 port 54920 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:43.186131 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:43.209759 systemd-logind[1870]: New session 4 of user core.
Feb 13 15:51:43.221400 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:51:43.362699 sshd[2156]: Connection closed by 139.178.89.65 port 54920
Feb 13 15:51:43.363797 sshd-session[2154]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:43.374288 systemd[1]: sshd@3-172.31.28.66:22-139.178.89.65:54920.service: Deactivated successfully.
Feb 13 15:51:43.378359 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:51:43.387666 systemd-logind[1870]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:51:43.409337 systemd[1]: Started sshd@4-172.31.28.66:22-139.178.89.65:54926.service - OpenSSH per-connection server daemon (139.178.89.65:54926).
Feb 13 15:51:43.413181 systemd-logind[1870]: Removed session 4.
Feb 13 15:51:43.603973 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 54926 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:43.605625 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:43.628499 systemd-logind[1870]: New session 5 of user core.
Feb 13 15:51:43.638102 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:51:43.765973 sshd[2163]: Connection closed by 139.178.89.65 port 54926
Feb 13 15:51:43.766703 sshd-session[2161]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:43.776988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:51:43.777879 systemd[1]: sshd@4-172.31.28.66:22-139.178.89.65:54926.service: Deactivated successfully.
Feb 13 15:51:43.783786 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:51:43.785838 systemd-logind[1870]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:51:43.798079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:51:43.816111 systemd-logind[1870]: Removed session 5.
Feb 13 15:51:43.824176 systemd[1]: Started sshd@5-172.31.28.66:22-139.178.89.65:54936.service - OpenSSH per-connection server daemon (139.178.89.65:54936).
Feb 13 15:51:43.998829 sshd[2171]: Accepted publickey for core from 139.178.89.65 port 54936 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:44.003066 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:44.015357 systemd-logind[1870]: New session 6 of user core.
Feb 13 15:51:44.020003 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:51:44.150852 sshd[2173]: Connection closed by 139.178.89.65 port 54936
Feb 13 15:51:44.151871 sshd-session[2171]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:44.161277 systemd-logind[1870]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:51:44.162041 systemd[1]: sshd@5-172.31.28.66:22-139.178.89.65:54936.service: Deactivated successfully.
Feb 13 15:51:44.172868 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:51:44.189017 systemd-logind[1870]: Removed session 6.
Feb 13 15:51:44.194551 systemd[1]: Started sshd@6-172.31.28.66:22-139.178.89.65:54950.service - OpenSSH per-connection server daemon (139.178.89.65:54950).
Feb 13 15:51:44.293613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:51:44.307428 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:51:44.385170 sshd[2178]: Accepted publickey for core from 139.178.89.65 port 54950 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:44.397868 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:44.424040 systemd-logind[1870]: New session 7 of user core.
Feb 13 15:51:44.444307 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:51:44.482756 kubelet[2185]: E0213 15:51:44.482560    2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:51:44.488748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:51:44.488949 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:51:44.588353 sudo[2194]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:51:44.589532 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:51:44.613060 sudo[2194]: pam_unix(sudo:session): session closed for user root
Feb 13 15:51:44.635419 sshd[2192]: Connection closed by 139.178.89.65 port 54950
Feb 13 15:51:44.636463 sshd-session[2178]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:44.646531 systemd[1]: sshd@6-172.31.28.66:22-139.178.89.65:54950.service: Deactivated successfully.
Feb 13 15:51:44.653194 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:51:44.656343 systemd-logind[1870]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:51:44.678399 systemd[1]: Started sshd@7-172.31.28.66:22-139.178.89.65:41448.service - OpenSSH per-connection server daemon (139.178.89.65:41448).
Feb 13 15:51:44.682267 systemd-logind[1870]: Removed session 7.
Feb 13 15:51:44.851015 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 41448 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:44.853205 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:44.862344 systemd-logind[1870]: New session 8 of user core.
Feb 13 15:51:44.871066 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:51:44.985437 sudo[2203]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:51:44.985992 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:51:44.992475 sudo[2203]: pam_unix(sudo:session): session closed for user root
Feb 13 15:51:45.001017 sudo[2202]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:51:45.001411 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:51:45.024393 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:51:45.065106 augenrules[2225]: No rules
Feb 13 15:51:45.067549 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:51:45.067820 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:51:45.070832 sudo[2202]: pam_unix(sudo:session): session closed for user root
Feb 13 15:51:45.093574 sshd[2201]: Connection closed by 139.178.89.65 port 41448
Feb 13 15:51:45.095282 sshd-session[2199]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:45.101086 systemd[1]: sshd@7-172.31.28.66:22-139.178.89.65:41448.service: Deactivated successfully.
Feb 13 15:51:45.104717 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:51:45.110880 systemd-logind[1870]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:51:45.116904 systemd-logind[1870]: Removed session 8.
Feb 13 15:51:45.144415 systemd[1]: Started sshd@8-172.31.28.66:22-139.178.89.65:41450.service - OpenSSH per-connection server daemon (139.178.89.65:41450).
Feb 13 15:51:45.350019 sshd[2233]: Accepted publickey for core from 139.178.89.65 port 41450 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:51:45.353296 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:51:45.370469 systemd-logind[1870]: New session 9 of user core.
Feb 13 15:51:45.387258 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:51:45.514679 sudo[2236]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:51:45.515119 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:51:46.704576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:51:46.712109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:51:46.760209 systemd[1]: Reloading requested from client PID 2275 ('systemctl') (unit session-9.scope)...
Feb 13 15:51:46.760231 systemd[1]: Reloading...
Feb 13 15:51:46.980783 zram_generator::config[2318]: No configuration found.
Feb 13 15:51:47.178976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:51:47.381847 systemd[1]: Reloading finished in 620 ms.
Feb 13 15:51:47.506670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:51:47.511971 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:51:47.512241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:51:47.519283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:51:48.021122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:51:48.031407 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:51:48.114521 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:51:48.114521 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:51:48.114521 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:51:48.115001 kubelet[2377]: I0213 15:51:48.114591    2377 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:51:48.969360 kubelet[2377]: I0213 15:51:48.969323    2377 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:51:48.969360 kubelet[2377]: I0213 15:51:48.969353    2377 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:51:48.969904 kubelet[2377]: I0213 15:51:48.969888    2377 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:51:49.007757 kubelet[2377]: I0213 15:51:49.007155    2377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:51:49.019616 kubelet[2377]: I0213 15:51:49.019528    2377 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:51:49.022099 kubelet[2377]: I0213 15:51:49.022051    2377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:51:49.022382 kubelet[2377]: I0213 15:51:49.022352    2377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:51:49.023139 kubelet[2377]: I0213 15:51:49.022982    2377 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:51:49.023139 kubelet[2377]: I0213 15:51:49.023012    2377 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:51:49.023251 kubelet[2377]: I0213 15:51:49.023189    2377 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:51:49.024109 kubelet[2377]: I0213 15:51:49.023444    2377 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:51:49.024109 kubelet[2377]: I0213 15:51:49.023466    2377 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:51:49.024109 kubelet[2377]: I0213 15:51:49.023499    2377 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:51:49.024109 kubelet[2377]: I0213 15:51:49.023514    2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:51:49.026523 kubelet[2377]: E0213 15:51:49.025627    2377 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:49.026523 kubelet[2377]: E0213 15:51:49.025896    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:49.027117 kubelet[2377]: I0213 15:51:49.027043    2377 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:51:49.030979 kubelet[2377]: I0213 15:51:49.030936    2377 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:51:49.033326 kubelet[2377]: W0213 15:51:49.033291    2377 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:51:49.035749 kubelet[2377]: I0213 15:51:49.034641    2377 server.go:1256] "Started kubelet"
Feb 13 15:51:49.036045 kubelet[2377]: I0213 15:51:49.036029    2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:51:49.036495 kubelet[2377]: I0213 15:51:49.036478    2377 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:51:49.036638 kubelet[2377]: I0213 15:51:49.036626    2377 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:51:49.037991 kubelet[2377]: I0213 15:51:49.037971    2377 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:51:49.039329 kubelet[2377]: I0213 15:51:49.038006    2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:51:49.048900 kubelet[2377]: I0213 15:51:49.048864    2377 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:51:49.049654 kubelet[2377]: I0213 15:51:49.049633    2377 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:51:49.049902 kubelet[2377]: I0213 15:51:49.049875    2377 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:51:49.052825 kubelet[2377]: I0213 15:51:49.051380    2377 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:51:49.052825 kubelet[2377]: I0213 15:51:49.051529    2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:51:49.054049 kubelet[2377]: I0213 15:51:49.053813    2377 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:51:49.057925 kubelet[2377]: W0213 15:51:49.057145    2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:51:49.058491 kubelet[2377]: E0213 15:51:49.058473    2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:51:49.058659 kubelet[2377]: E0213 15:51:49.058644    2377 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.28.66\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 15:51:49.065030 kubelet[2377]: E0213 15:51:49.064941    2377 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.66.1823cf63e454a9d3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.66,UID:172.31.28.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.28.66,},FirstTimestamp:2025-02-13 15:51:49.034609107 +0000 UTC m=+0.995595094,LastTimestamp:2025-02-13 15:51:49.034609107 +0000 UTC m=+0.995595094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.66,}"
Feb 13 15:51:49.065816 kubelet[2377]: W0213 15:51:49.065241    2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.28.66" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:51:49.065816 kubelet[2377]: E0213 15:51:49.065309    2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.66" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:51:49.065995 kubelet[2377]: W0213 15:51:49.065901    2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:51:49.065995 kubelet[2377]: E0213 15:51:49.065925    2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:51:49.071769 kubelet[2377]: E0213 15:51:49.067418    2377 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:51:49.076562 kubelet[2377]: I0213 15:51:49.076267    2377 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:51:49.076562 kubelet[2377]: I0213 15:51:49.076293    2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:51:49.076562 kubelet[2377]: I0213 15:51:49.076317    2377 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:51:49.080845 kubelet[2377]: I0213 15:51:49.080517    2377 policy_none.go:49] "None policy: Start"
Feb 13 15:51:49.082995 kubelet[2377]: I0213 15:51:49.082386    2377 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:51:49.082995 kubelet[2377]: I0213 15:51:49.082420    2377 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:51:49.093918 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:51:49.114099 kubelet[2377]: E0213 15:51:49.114065    2377 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.66.1823cf63e648e5eb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.66,UID:172.31.28.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.28.66,},FirstTimestamp:2025-02-13 15:51:49.067392491 +0000 UTC m=+1.028378467,LastTimestamp:2025-02-13 15:51:49.067392491 +0000 UTC m=+1.028378467,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.66,}"
Feb 13 15:51:49.116799 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:51:49.125907 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:51:49.128560 kubelet[2377]: E0213 15:51:49.128481    2377 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.66.1823cf63e6b8a99d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.66,UID:172.31.28.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.28.66 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.28.66,},FirstTimestamp:2025-02-13 15:51:49.074717085 +0000 UTC m=+1.035703063,LastTimestamp:2025-02-13 15:51:49.074717085 +0000 UTC m=+1.035703063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.66,}"
Feb 13 15:51:49.136752 kubelet[2377]: I0213 15:51:49.134411    2377 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:51:49.136752 kubelet[2377]: I0213 15:51:49.134644    2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:51:49.145258 kubelet[2377]: E0213 15:51:49.145221    2377 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.66\" not found"
Feb 13 15:51:49.153008 kubelet[2377]: I0213 15:51:49.152970    2377 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.66"
Feb 13 15:51:49.159092 kubelet[2377]: E0213 15:51:49.159064    2377 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.66.1823cf63e6b91b7c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.66,UID:172.31.28.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.28.66 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.28.66,},FirstTimestamp:2025-02-13 15:51:49.074746236 +0000 UTC m=+1.035732199,LastTimestamp:2025-02-13 15:51:49.074746236 +0000 UTC m=+1.035732199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.66,}"
Feb 13 15:51:49.162286 kubelet[2377]: E0213 15:51:49.162260    2377 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.66"
Feb 13 15:51:49.193604 kubelet[2377]: E0213 15:51:49.192717    2377 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.66.1823cf63e6b92b1e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.66,UID:172.31.28.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.28.66 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.28.66,},FirstTimestamp:2025-02-13 15:51:49.074750238 +0000 UTC m=+1.035736202,LastTimestamp:2025-02-13 15:51:49.074750238 +0000 UTC m=+1.035736202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.66,}"
Feb 13 15:51:49.201265 kubelet[2377]: I0213 15:51:49.201218    2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:51:49.203355 kubelet[2377]: I0213 15:51:49.203309    2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:51:49.203355 kubelet[2377]: I0213 15:51:49.203357    2377 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:51:49.203542 kubelet[2377]: I0213 15:51:49.203382    2377 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:51:49.203542 kubelet[2377]: E0213 15:51:49.203507    2377 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:51:49.266056 kubelet[2377]: E0213 15:51:49.265922    2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.28.66\" not found" node="172.31.28.66"
Feb 13 15:51:49.364457 kubelet[2377]: I0213 15:51:49.364426    2377 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.66"
Feb 13 15:51:49.376504 kubelet[2377]: I0213 15:51:49.376449    2377 kubelet_node_status.go:76] "Successfully registered node" node="172.31.28.66"
Feb 13 15:51:49.412506 kubelet[2377]: E0213 15:51:49.412462    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.512875 kubelet[2377]: E0213 15:51:49.512820    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.613820 kubelet[2377]: E0213 15:51:49.613705    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.714511 kubelet[2377]: E0213 15:51:49.714442    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.815241 kubelet[2377]: E0213 15:51:49.815148    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.916164 kubelet[2377]: E0213 15:51:49.916039    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:49.972229 kubelet[2377]: I0213 15:51:49.972163    2377 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:51:49.972404 kubelet[2377]: W0213 15:51:49.972371    2377 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:51:50.017096 kubelet[2377]: E0213 15:51:50.017046    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:50.027011 kubelet[2377]: E0213 15:51:50.026892    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:50.117831 kubelet[2377]: E0213 15:51:50.117790    2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.66\" not found"
Feb 13 15:51:50.222678 kubelet[2377]: I0213 15:51:50.222555    2377 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:51:50.223671 kubelet[2377]: I0213 15:51:50.223287    2377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:51:50.223755 containerd[1886]: time="2025-02-13T15:51:50.222979755Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:51:50.552004 sudo[2236]: pam_unix(sudo:session): session closed for user root
Feb 13 15:51:50.575911 sshd[2235]: Connection closed by 139.178.89.65 port 41450
Feb 13 15:51:50.576649 sshd-session[2233]: pam_unix(sshd:session): session closed for user core
Feb 13 15:51:50.581264 systemd[1]: sshd@8-172.31.28.66:22-139.178.89.65:41450.service: Deactivated successfully.
Feb 13 15:51:50.584559 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:51:50.585685 systemd-logind[1870]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:51:50.587088 systemd-logind[1870]: Removed session 9.
Feb 13 15:51:51.027536 kubelet[2377]: E0213 15:51:51.027492    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:51.027536 kubelet[2377]: I0213 15:51:51.027505    2377 apiserver.go:52] "Watching apiserver"
Feb 13 15:51:51.047643 kubelet[2377]: I0213 15:51:51.047013    2377 topology_manager.go:215] "Topology Admit Handler" podUID="f45ab1d1-11e9-4150-aeca-1c2c61a789dd" podNamespace="calico-system" podName="calico-node-ck9h2"
Feb 13 15:51:51.047643 kubelet[2377]: I0213 15:51:51.047489    2377 topology_manager.go:215] "Topology Admit Handler" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776" podNamespace="calico-system" podName="csi-node-driver-hswh5"
Feb 13 15:51:51.047643 kubelet[2377]: I0213 15:51:51.047591    2377 topology_manager.go:215] "Topology Admit Handler" podUID="4e674936-f506-4172-b0a2-eac0e354d543" podNamespace="kube-system" podName="kube-proxy-hptb8"
Feb 13 15:51:51.049064 kubelet[2377]: E0213 15:51:51.048443    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:51:51.053538 kubelet[2377]: I0213 15:51:51.053487    2377 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:51:51.063187 systemd[1]: Created slice kubepods-besteffort-pod4e674936_f506_4172_b0a2_eac0e354d543.slice - libcontainer container kubepods-besteffort-pod4e674936_f506_4172_b0a2_eac0e354d543.slice.
Feb 13 15:51:51.064300 kubelet[2377]: I0213 15:51:51.063960    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmc5\" (UniqueName: \"kubernetes.io/projected/c021f669-a6e0-4344-be54-42ff0a3b9776-kube-api-access-nhmc5\") pod \"csi-node-driver-hswh5\" (UID: \"c021f669-a6e0-4344-be54-42ff0a3b9776\") " pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:51:51.064300 kubelet[2377]: I0213 15:51:51.064038    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e674936-f506-4172-b0a2-eac0e354d543-lib-modules\") pod \"kube-proxy-hptb8\" (UID: \"4e674936-f506-4172-b0a2-eac0e354d543\") " pod="kube-system/kube-proxy-hptb8"
Feb 13 15:51:51.064300 kubelet[2377]: I0213 15:51:51.064095    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft9f\" (UniqueName: \"kubernetes.io/projected/4e674936-f506-4172-b0a2-eac0e354d543-kube-api-access-6ft9f\") pod \"kube-proxy-hptb8\" (UID: \"4e674936-f506-4172-b0a2-eac0e354d543\") " pod="kube-system/kube-proxy-hptb8"
Feb 13 15:51:51.064300 kubelet[2377]: I0213 15:51:51.064127    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-lib-modules\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.064884 kubelet[2377]: I0213 15:51:51.064575    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-flexvol-driver-host\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.064884 kubelet[2377]: I0213 15:51:51.064651    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c021f669-a6e0-4344-be54-42ff0a3b9776-registration-dir\") pod \"csi-node-driver-hswh5\" (UID: \"c021f669-a6e0-4344-be54-42ff0a3b9776\") " pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:51:51.064884 kubelet[2377]: I0213 15:51:51.064686    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-node-certs\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.064884 kubelet[2377]: I0213 15:51:51.064744    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c021f669-a6e0-4344-be54-42ff0a3b9776-socket-dir\") pod \"csi-node-driver-hswh5\" (UID: \"c021f669-a6e0-4344-be54-42ff0a3b9776\") " pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:51:51.064884 kubelet[2377]: I0213 15:51:51.064809    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-cni-bin-dir\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066055 kubelet[2377]: I0213 15:51:51.064845    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-cni-net-dir\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066598 kubelet[2377]: I0213 15:51:51.066141    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e674936-f506-4172-b0a2-eac0e354d543-xtables-lock\") pod \"kube-proxy-hptb8\" (UID: \"4e674936-f506-4172-b0a2-eac0e354d543\") " pod="kube-system/kube-proxy-hptb8"
Feb 13 15:51:51.066598 kubelet[2377]: I0213 15:51:51.066221    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-xtables-lock\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066598 kubelet[2377]: I0213 15:51:51.066256    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-tigera-ca-bundle\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066598 kubelet[2377]: I0213 15:51:51.066307    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-var-lib-calico\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066598 kubelet[2377]: I0213 15:51:51.066375    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-cni-log-dir\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066950 kubelet[2377]: I0213 15:51:51.066406    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wzj\" (UniqueName: \"kubernetes.io/projected/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-kube-api-access-h8wzj\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.066950 kubelet[2377]: I0213 15:51:51.066461    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c021f669-a6e0-4344-be54-42ff0a3b9776-varrun\") pod \"csi-node-driver-hswh5\" (UID: \"c021f669-a6e0-4344-be54-42ff0a3b9776\") " pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:51:51.066950 kubelet[2377]: I0213 15:51:51.066520    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c021f669-a6e0-4344-be54-42ff0a3b9776-kubelet-dir\") pod \"csi-node-driver-hswh5\" (UID: \"c021f669-a6e0-4344-be54-42ff0a3b9776\") " pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:51:51.066950 kubelet[2377]: I0213 15:51:51.066551    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e674936-f506-4172-b0a2-eac0e354d543-kube-proxy\") pod \"kube-proxy-hptb8\" (UID: \"4e674936-f506-4172-b0a2-eac0e354d543\") " pod="kube-system/kube-proxy-hptb8"
Feb 13 15:51:51.066950 kubelet[2377]: I0213 15:51:51.066577    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-policysync\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.067409 kubelet[2377]: I0213 15:51:51.066796    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f45ab1d1-11e9-4150-aeca-1c2c61a789dd-var-run-calico\") pod \"calico-node-ck9h2\" (UID: \"f45ab1d1-11e9-4150-aeca-1c2c61a789dd\") " pod="calico-system/calico-node-ck9h2"
Feb 13 15:51:51.111279 systemd[1]: Created slice kubepods-besteffort-podf45ab1d1_11e9_4150_aeca_1c2c61a789dd.slice - libcontainer container kubepods-besteffort-podf45ab1d1_11e9_4150_aeca_1c2c61a789dd.slice.
Feb 13 15:51:51.177950 kubelet[2377]: E0213 15:51:51.177892    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.177950 kubelet[2377]: W0213 15:51:51.177947    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.178236 kubelet[2377]: E0213 15:51:51.178004    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.179595 kubelet[2377]: E0213 15:51:51.178517    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.179595 kubelet[2377]: W0213 15:51:51.179081    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.179905 kubelet[2377]: E0213 15:51:51.179797    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.182619 kubelet[2377]: E0213 15:51:51.182586    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.183025 kubelet[2377]: W0213 15:51:51.182991    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.183166 kubelet[2377]: E0213 15:51:51.183043    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.186879 kubelet[2377]: E0213 15:51:51.186739    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.186879 kubelet[2377]: W0213 15:51:51.186867    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.187383 kubelet[2377]: E0213 15:51:51.187312    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.187808 kubelet[2377]: E0213 15:51:51.187785    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.187808 kubelet[2377]: W0213 15:51:51.187803    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.187952 kubelet[2377]: E0213 15:51:51.187833    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.189504 kubelet[2377]: E0213 15:51:51.188476    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.189504 kubelet[2377]: W0213 15:51:51.188492    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.189504 kubelet[2377]: E0213 15:51:51.188512    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.190527 kubelet[2377]: E0213 15:51:51.190408    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.190527 kubelet[2377]: W0213 15:51:51.190426    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.190527 kubelet[2377]: E0213 15:51:51.190452    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.203759 kubelet[2377]: E0213 15:51:51.202892    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.203759 kubelet[2377]: W0213 15:51:51.202918    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.203759 kubelet[2377]: E0213 15:51:51.202947    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.241830 kubelet[2377]: E0213 15:51:51.235854    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.241830 kubelet[2377]: W0213 15:51:51.235883    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.241830 kubelet[2377]: E0213 15:51:51.235910    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.253582 kubelet[2377]: E0213 15:51:51.251018    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.253582 kubelet[2377]: W0213 15:51:51.251063    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.253582 kubelet[2377]: E0213 15:51:51.251094    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.266781 kubelet[2377]: E0213 15:51:51.266122    2377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:51:51.266781 kubelet[2377]: W0213 15:51:51.266168    2377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:51:51.266781 kubelet[2377]: E0213 15:51:51.266198    2377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:51:51.396994 containerd[1886]: time="2025-02-13T15:51:51.396868673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hptb8,Uid:4e674936-f506-4172-b0a2-eac0e354d543,Namespace:kube-system,Attempt:0,}"
Feb 13 15:51:51.432016 containerd[1886]: time="2025-02-13T15:51:51.431965130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ck9h2,Uid:f45ab1d1-11e9-4150-aeca-1c2c61a789dd,Namespace:calico-system,Attempt:0,}"
Feb 13 15:51:52.028144 kubelet[2377]: E0213 15:51:52.028109    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:52.060306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213047776.mount: Deactivated successfully.
Feb 13 15:51:52.083027 containerd[1886]: time="2025-02-13T15:51:52.082966079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:51:52.086622 containerd[1886]: time="2025-02-13T15:51:52.086573273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:51:52.088221 containerd[1886]: time="2025-02-13T15:51:52.088164309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:51:52.091191 containerd[1886]: time="2025-02-13T15:51:52.091117470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:51:52.093093 containerd[1886]: time="2025-02-13T15:51:52.092964981Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:51:52.099523 containerd[1886]: time="2025-02-13T15:51:52.099478393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:51:52.102947 containerd[1886]: time="2025-02-13T15:51:52.101960229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.879913ms"
Feb 13 15:51:52.104437 containerd[1886]: time="2025-02-13T15:51:52.104295293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.298111ms"
Feb 13 15:51:52.648466 containerd[1886]: time="2025-02-13T15:51:52.624443914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:51:52.648466 containerd[1886]: time="2025-02-13T15:51:52.648082671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:51:52.648466 containerd[1886]: time="2025-02-13T15:51:52.648104638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:51:52.652587 containerd[1886]: time="2025-02-13T15:51:52.652431606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:51:52.664355 containerd[1886]: time="2025-02-13T15:51:52.660860236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:51:52.664355 containerd[1886]: time="2025-02-13T15:51:52.664299880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:51:52.672557 containerd[1886]: time="2025-02-13T15:51:52.664343168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:51:52.672557 containerd[1886]: time="2025-02-13T15:51:52.664479248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:51:53.035180 kubelet[2377]: E0213 15:51:53.035079    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:53.084528 systemd[1]: run-containerd-runc-k8s.io-a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7-runc.v0TydD.mount: Deactivated successfully.
Feb 13 15:51:53.103115 systemd[1]: Started cri-containerd-3a8cee4488d956909d18283395ead1189185245e3022cc6cff0f600ac673fa51.scope - libcontainer container 3a8cee4488d956909d18283395ead1189185245e3022cc6cff0f600ac673fa51.
Feb 13 15:51:53.107054 systemd[1]: Started cri-containerd-a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7.scope - libcontainer container a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7.
Feb 13 15:51:53.206804 kubelet[2377]: E0213 15:51:53.206137    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:51:53.207312 containerd[1886]: time="2025-02-13T15:51:53.207248942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ck9h2,Uid:f45ab1d1-11e9-4150-aeca-1c2c61a789dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\""
Feb 13 15:51:53.212843 containerd[1886]: time="2025-02-13T15:51:53.212810033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hptb8,Uid:4e674936-f506-4172-b0a2-eac0e354d543,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a8cee4488d956909d18283395ead1189185245e3022cc6cff0f600ac673fa51\""
Feb 13 15:51:53.218192 containerd[1886]: time="2025-02-13T15:51:53.218140596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:51:54.035483 kubelet[2377]: E0213 15:51:54.035420    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:55.035619 kubelet[2377]: E0213 15:51:55.035534    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:55.089585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040174384.mount: Deactivated successfully.
Feb 13 15:51:55.206166 kubelet[2377]: E0213 15:51:55.206131    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:51:55.333819 containerd[1886]: time="2025-02-13T15:51:55.333668988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:55.335989 containerd[1886]: time="2025-02-13T15:51:55.335871661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 15:51:55.337376 containerd[1886]: time="2025-02-13T15:51:55.337168361Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:55.339946 containerd[1886]: time="2025-02-13T15:51:55.339907382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:55.340865 containerd[1886]: time="2025-02-13T15:51:55.340831245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.122652691s"
Feb 13 15:51:55.341010 containerd[1886]: time="2025-02-13T15:51:55.340987033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:51:55.344144 containerd[1886]: time="2025-02-13T15:51:55.343793940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:51:55.344892 containerd[1886]: time="2025-02-13T15:51:55.344862575Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:51:55.368133 containerd[1886]: time="2025-02-13T15:51:55.368079863Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e\""
Feb 13 15:51:55.369405 containerd[1886]: time="2025-02-13T15:51:55.369366667Z" level=info msg="StartContainer for \"47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e\""
Feb 13 15:51:55.436503 systemd[1]: Started cri-containerd-47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e.scope - libcontainer container 47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e.
Feb 13 15:51:55.495543 containerd[1886]: time="2025-02-13T15:51:55.495258128Z" level=info msg="StartContainer for \"47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e\" returns successfully"
Feb 13 15:51:55.521289 systemd[1]: cri-containerd-47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e.scope: Deactivated successfully.
Feb 13 15:51:55.696263 containerd[1886]: time="2025-02-13T15:51:55.695400270Z" level=info msg="shim disconnected" id=47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e namespace=k8s.io
Feb 13 15:51:55.696263 containerd[1886]: time="2025-02-13T15:51:55.695476387Z" level=warning msg="cleaning up after shim disconnected" id=47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e namespace=k8s.io
Feb 13 15:51:55.696263 containerd[1886]: time="2025-02-13T15:51:55.695489834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:51:55.986494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47a8ce313ce8fc0203d80c2084d7fa0d8dde8878c8695f0ddb6339a3d333ca3e-rootfs.mount: Deactivated successfully.
Feb 13 15:51:56.036114 kubelet[2377]: E0213 15:51:56.036051    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:56.896360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556445321.mount: Deactivated successfully.
Feb 13 15:51:57.037176 kubelet[2377]: E0213 15:51:57.037125    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:57.208254 kubelet[2377]: E0213 15:51:57.208138    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:51:57.523053 containerd[1886]: time="2025-02-13T15:51:57.522992958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:57.525617 containerd[1886]: time="2025-02-13T15:51:57.524610340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:51:57.528434 containerd[1886]: time="2025-02-13T15:51:57.528384670Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:57.534295 containerd[1886]: time="2025-02-13T15:51:57.534222180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:51:57.537501 containerd[1886]: time="2025-02-13T15:51:57.534987348Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.191156959s"
Feb 13 15:51:57.537501 containerd[1886]: time="2025-02-13T15:51:57.535030502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:51:57.538749 containerd[1886]: time="2025-02-13T15:51:57.538688991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:51:57.539905 containerd[1886]: time="2025-02-13T15:51:57.539870608Z" level=info msg="CreateContainer within sandbox \"3a8cee4488d956909d18283395ead1189185245e3022cc6cff0f600ac673fa51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:51:57.563614 containerd[1886]: time="2025-02-13T15:51:57.563562626Z" level=info msg="CreateContainer within sandbox \"3a8cee4488d956909d18283395ead1189185245e3022cc6cff0f600ac673fa51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a\""
Feb 13 15:51:57.564374 containerd[1886]: time="2025-02-13T15:51:57.564348413Z" level=info msg="StartContainer for \"2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a\""
Feb 13 15:51:57.605588 systemd[1]: run-containerd-runc-k8s.io-2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a-runc.yJBKGr.mount: Deactivated successfully.
Feb 13 15:51:57.611924 systemd[1]: Started cri-containerd-2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a.scope - libcontainer container 2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a.
Feb 13 15:51:57.653064 containerd[1886]: time="2025-02-13T15:51:57.652978776Z" level=info msg="StartContainer for \"2ed9c9b5e60513e576b4482d687c3c84ed08aa2a1ca7303ef5a6e661cba5a89a\" returns successfully"
Feb 13 15:51:58.039536 kubelet[2377]: E0213 15:51:58.038023    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:59.040766 kubelet[2377]: E0213 15:51:59.039329    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:51:59.206183 kubelet[2377]: E0213 15:51:59.205654    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:00.040824 kubelet[2377]: E0213 15:52:00.040416    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:00.938714 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:52:01.040750 kubelet[2377]: E0213 15:52:01.040698    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:01.207187 kubelet[2377]: E0213 15:52:01.206254    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:02.044643 kubelet[2377]: E0213 15:52:02.044116    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:02.958710 containerd[1886]: time="2025-02-13T15:52:02.958656131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:02.960218 containerd[1886]: time="2025-02-13T15:52:02.960156166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:52:02.962000 containerd[1886]: time="2025-02-13T15:52:02.961938251Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:02.965549 containerd[1886]: time="2025-02-13T15:52:02.965387983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:02.966850 containerd[1886]: time="2025-02-13T15:52:02.966232091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.427483468s"
Feb 13 15:52:02.966850 containerd[1886]: time="2025-02-13T15:52:02.966276117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:52:02.968661 containerd[1886]: time="2025-02-13T15:52:02.968621123Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:52:03.016766 containerd[1886]: time="2025-02-13T15:52:03.016687970Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20\""
Feb 13 15:52:03.018005 containerd[1886]: time="2025-02-13T15:52:03.017884025Z" level=info msg="StartContainer for \"bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20\""
Feb 13 15:52:03.045447 kubelet[2377]: E0213 15:52:03.045223    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:03.081717 systemd[1]: run-containerd-runc-k8s.io-bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20-runc.bOzOUq.mount: Deactivated successfully.
Feb 13 15:52:03.104087 systemd[1]: Started cri-containerd-bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20.scope - libcontainer container bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20.
Feb 13 15:52:03.149113 containerd[1886]: time="2025-02-13T15:52:03.149062225Z" level=info msg="StartContainer for \"bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20\" returns successfully"
Feb 13 15:52:03.204498 kubelet[2377]: E0213 15:52:03.204457    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:03.397238 kubelet[2377]: I0213 15:52:03.397182    2377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hptb8" podStartSLOduration=10.075432913 podStartE2EDuration="14.397085981s" podCreationTimestamp="2025-02-13 15:51:49 +0000 UTC" firstStartedPulling="2025-02-13 15:51:53.21565949 +0000 UTC m=+5.176645455" lastFinishedPulling="2025-02-13 15:51:57.537312544 +0000 UTC m=+9.498298523" observedRunningTime="2025-02-13 15:51:58.366667336 +0000 UTC m=+10.327653320" watchObservedRunningTime="2025-02-13 15:52:03.397085981 +0000 UTC m=+15.358072010"
Feb 13 15:52:04.045811 kubelet[2377]: E0213 15:52:04.045750    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:04.092406 containerd[1886]: time="2025-02-13T15:52:04.092154420Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:52:04.097271 systemd[1]: cri-containerd-bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20.scope: Deactivated successfully.
Feb 13 15:52:04.123977 kubelet[2377]: I0213 15:52:04.123945    2377 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:52:04.155343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20-rootfs.mount: Deactivated successfully.
Feb 13 15:52:04.934311 containerd[1886]: time="2025-02-13T15:52:04.934214384Z" level=info msg="shim disconnected" id=bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20 namespace=k8s.io
Feb 13 15:52:04.934311 containerd[1886]: time="2025-02-13T15:52:04.934296449Z" level=warning msg="cleaning up after shim disconnected" id=bada76ee4c493d19f2b94dfc5ff7b4fda88b2db94d3972c4185482f22ff62c20 namespace=k8s.io
Feb 13 15:52:04.934311 containerd[1886]: time="2025-02-13T15:52:04.934310479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:52:05.046218 kubelet[2377]: E0213 15:52:05.046158    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:05.220502 systemd[1]: Created slice kubepods-besteffort-podc021f669_a6e0_4344_be54_42ff0a3b9776.slice - libcontainer container kubepods-besteffort-podc021f669_a6e0_4344_be54_42ff0a3b9776.slice.
Feb 13 15:52:05.234024 containerd[1886]: time="2025-02-13T15:52:05.233810902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:0,}"
Feb 13 15:52:05.330104 containerd[1886]: time="2025-02-13T15:52:05.328660762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:52:05.379205 containerd[1886]: time="2025-02-13T15:52:05.379141717Z" level=error msg="Failed to destroy network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:05.380531 containerd[1886]: time="2025-02-13T15:52:05.380474158Z" level=error msg="encountered an error cleaning up failed sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:05.383637 containerd[1886]: time="2025-02-13T15:52:05.380582306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:05.383201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad-shm.mount: Deactivated successfully.
Feb 13 15:52:05.384069 kubelet[2377]: E0213 15:52:05.380915    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:05.384069 kubelet[2377]: E0213 15:52:05.380985    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:05.384069 kubelet[2377]: E0213 15:52:05.381015    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:05.384234 kubelet[2377]: E0213 15:52:05.381080    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:05.766073 kubelet[2377]: I0213 15:52:05.766026    2377 topology_manager.go:215] "Topology Admit Handler" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb" podNamespace="default" podName="nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:05.773508 systemd[1]: Created slice kubepods-besteffort-podd5b6846b_905c_4b4d_ab65_740bbfe8f5eb.slice - libcontainer container kubepods-besteffort-podd5b6846b_905c_4b4d_ab65_740bbfe8f5eb.slice.
Feb 13 15:52:05.921635 kubelet[2377]: I0213 15:52:05.921550    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgch\" (UniqueName: \"kubernetes.io/projected/d5b6846b-905c-4b4d-ab65-740bbfe8f5eb-kube-api-access-qdgch\") pod \"nginx-deployment-6d5f899847-2dzlr\" (UID: \"d5b6846b-905c-4b4d-ab65-740bbfe8f5eb\") " pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:06.047900 kubelet[2377]: E0213 15:52:06.047839    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:06.079055 containerd[1886]: time="2025-02-13T15:52:06.078688586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:0,}"
Feb 13 15:52:06.296650 containerd[1886]: time="2025-02-13T15:52:06.296595956Z" level=error msg="Failed to destroy network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.300766 containerd[1886]: time="2025-02-13T15:52:06.297003510Z" level=error msg="encountered an error cleaning up failed sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.300766 containerd[1886]: time="2025-02-13T15:52:06.297077224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.301007 kubelet[2377]: E0213 15:52:06.300908    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.301007 kubelet[2377]: E0213 15:52:06.300972    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:06.301007 kubelet[2377]: E0213 15:52:06.301003    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:06.301139 kubelet[2377]: E0213 15:52:06.301067    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:06.301929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab-shm.mount: Deactivated successfully.
Feb 13 15:52:06.339494 kubelet[2377]: I0213 15:52:06.339323    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad"
Feb 13 15:52:06.342233 containerd[1886]: time="2025-02-13T15:52:06.342085559Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:06.345925 kubelet[2377]: I0213 15:52:06.345696    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab"
Feb 13 15:52:06.354709 containerd[1886]: time="2025-02-13T15:52:06.349042253Z" level=info msg="Ensure that sandbox a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad in task-service has been cleanup successfully"
Feb 13 15:52:06.354718 systemd[1]: run-netns-cni\x2de21aa7cc\x2de24c\x2d575e\x2d8fd5\x2d01a733f3c231.mount: Deactivated successfully.
Feb 13 15:52:06.359222 containerd[1886]: time="2025-02-13T15:52:06.357893127Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:06.359222 containerd[1886]: time="2025-02-13T15:52:06.357935398Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:06.359222 containerd[1886]: time="2025-02-13T15:52:06.358800484Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:06.359222 containerd[1886]: time="2025-02-13T15:52:06.359051577Z" level=info msg="Ensure that sandbox 1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab in task-service has been cleanup successfully"
Feb 13 15:52:06.359222 containerd[1886]: time="2025-02-13T15:52:06.359067529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:1,}"
Feb 13 15:52:06.366787 containerd[1886]: time="2025-02-13T15:52:06.366556854Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:06.366787 containerd[1886]: time="2025-02-13T15:52:06.366626925Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:06.378775 containerd[1886]: time="2025-02-13T15:52:06.378552402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:1,}"
Feb 13 15:52:06.378592 systemd[1]: run-netns-cni\x2d914e911e\x2db749\x2d50e1\x2df04c\x2d89802caaf5c8.mount: Deactivated successfully.
Feb 13 15:52:06.544818 containerd[1886]: time="2025-02-13T15:52:06.544763931Z" level=error msg="Failed to destroy network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.546108 containerd[1886]: time="2025-02-13T15:52:06.546064489Z" level=error msg="encountered an error cleaning up failed sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.547165 containerd[1886]: time="2025-02-13T15:52:06.546145835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.547247 kubelet[2377]: E0213 15:52:06.546774    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.547247 kubelet[2377]: E0213 15:52:06.546837    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:06.547247 kubelet[2377]: E0213 15:52:06.546869    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:06.547377 kubelet[2377]: E0213 15:52:06.546933    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:06.548332 containerd[1886]: time="2025-02-13T15:52:06.548284986Z" level=error msg="Failed to destroy network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.548794 containerd[1886]: time="2025-02-13T15:52:06.548717002Z" level=error msg="encountered an error cleaning up failed sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.548887 containerd[1886]: time="2025-02-13T15:52:06.548834493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.549405 kubelet[2377]: E0213 15:52:06.549074    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:06.549405 kubelet[2377]: E0213 15:52:06.549118    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:06.549405 kubelet[2377]: E0213 15:52:06.549137    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:06.549529 kubelet[2377]: E0213 15:52:06.549198    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:07.048946 kubelet[2377]: E0213 15:52:07.048893    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:07.257293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece-shm.mount: Deactivated successfully.
Feb 13 15:52:07.352666 kubelet[2377]: I0213 15:52:07.351839    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184"
Feb 13 15:52:07.354085 containerd[1886]: time="2025-02-13T15:52:07.354049445Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:07.359615 containerd[1886]: time="2025-02-13T15:52:07.357037773Z" level=info msg="Ensure that sandbox bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184 in task-service has been cleanup successfully"
Feb 13 15:52:07.360763 containerd[1886]: time="2025-02-13T15:52:07.360285866Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:07.364509 containerd[1886]: time="2025-02-13T15:52:07.360903157Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:07.361547 systemd[1]: run-netns-cni\x2d1ca13f8c\x2de98e\x2d5249\x2df3ca\x2d924c5fb97837.mount: Deactivated successfully.
Feb 13 15:52:07.369162 containerd[1886]: time="2025-02-13T15:52:07.367547558Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:07.369162 containerd[1886]: time="2025-02-13T15:52:07.367681297Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:07.369162 containerd[1886]: time="2025-02-13T15:52:07.367696187Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:07.375628 containerd[1886]: time="2025-02-13T15:52:07.375578670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:2,}"
Feb 13 15:52:07.387411 kubelet[2377]: I0213 15:52:07.387245    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece"
Feb 13 15:52:07.390859 containerd[1886]: time="2025-02-13T15:52:07.390516043Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:07.391553 containerd[1886]: time="2025-02-13T15:52:07.391342272Z" level=info msg="Ensure that sandbox 0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece in task-service has been cleanup successfully"
Feb 13 15:52:07.392090 containerd[1886]: time="2025-02-13T15:52:07.391836566Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:07.392090 containerd[1886]: time="2025-02-13T15:52:07.391857918Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:07.392822 containerd[1886]: time="2025-02-13T15:52:07.392798864Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:07.406408 containerd[1886]: time="2025-02-13T15:52:07.406354515Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:07.408387 systemd[1]: run-netns-cni\x2dbdabecab\x2d5c4d\x2dea3a\x2dc495\x2d244d38d1857d.mount: Deactivated successfully.
Feb 13 15:52:07.410597 containerd[1886]: time="2025-02-13T15:52:07.408888942Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:07.411026 containerd[1886]: time="2025-02-13T15:52:07.410648163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:2,}"
Feb 13 15:52:07.651978 containerd[1886]: time="2025-02-13T15:52:07.651830043Z" level=error msg="Failed to destroy network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.653218 containerd[1886]: time="2025-02-13T15:52:07.653164655Z" level=error msg="encountered an error cleaning up failed sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.653521 containerd[1886]: time="2025-02-13T15:52:07.653438871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.653842 kubelet[2377]: E0213 15:52:07.653790    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.653842 kubelet[2377]: E0213 15:52:07.653853    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:07.654205 kubelet[2377]: E0213 15:52:07.653885    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:07.654205 kubelet[2377]: E0213 15:52:07.653951    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:07.661881 containerd[1886]: time="2025-02-13T15:52:07.661687041Z" level=error msg="Failed to destroy network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.662589 containerd[1886]: time="2025-02-13T15:52:07.662203613Z" level=error msg="encountered an error cleaning up failed sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.662589 containerd[1886]: time="2025-02-13T15:52:07.662285502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.663854 kubelet[2377]: E0213 15:52:07.662612    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:07.663854 kubelet[2377]: E0213 15:52:07.662676    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:07.663854 kubelet[2377]: E0213 15:52:07.662707    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:07.664531 kubelet[2377]: E0213 15:52:07.663232    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:08.049676 kubelet[2377]: E0213 15:52:08.049621    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:08.253405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5-shm.mount: Deactivated successfully.
Feb 13 15:52:08.392119 kubelet[2377]: I0213 15:52:08.391992    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5"
Feb 13 15:52:08.393394 containerd[1886]: time="2025-02-13T15:52:08.393047896Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:08.395270 containerd[1886]: time="2025-02-13T15:52:08.393410743Z" level=info msg="Ensure that sandbox be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5 in task-service has been cleanup successfully"
Feb 13 15:52:08.395270 containerd[1886]: time="2025-02-13T15:52:08.393677634Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:08.395270 containerd[1886]: time="2025-02-13T15:52:08.393696465Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:08.396481 systemd[1]: run-netns-cni\x2d1f510e44\x2dfab9\x2d54e2\x2d69f5\x2dcc9651ab1ec9.mount: Deactivated successfully.
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.397110826Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.397299025Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.397316222Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.398170573Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.398346259Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:08.399043 containerd[1886]: time="2025-02-13T15:52:08.398364528Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:08.400258 containerd[1886]: time="2025-02-13T15:52:08.400001105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:3,}"
Feb 13 15:52:08.401158 kubelet[2377]: I0213 15:52:08.401130    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273"
Feb 13 15:52:08.401796 containerd[1886]: time="2025-02-13T15:52:08.401659823Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:08.402082 containerd[1886]: time="2025-02-13T15:52:08.402055301Z" level=info msg="Ensure that sandbox 507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273 in task-service has been cleanup successfully"
Feb 13 15:52:08.402369 containerd[1886]: time="2025-02-13T15:52:08.402323822Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:08.402369 containerd[1886]: time="2025-02-13T15:52:08.402349854Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:08.407389 containerd[1886]: time="2025-02-13T15:52:08.402875239Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:08.407389 containerd[1886]: time="2025-02-13T15:52:08.403460187Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:08.407389 containerd[1886]: time="2025-02-13T15:52:08.403480381Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:08.406566 systemd[1]: run-netns-cni\x2dca54968f\x2dcb11\x2dc4a9\x2d1642\x2dd974d5345d9c.mount: Deactivated successfully.
Feb 13 15:52:08.407841 containerd[1886]: time="2025-02-13T15:52:08.407431196Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:08.407841 containerd[1886]: time="2025-02-13T15:52:08.407581607Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:08.407841 containerd[1886]: time="2025-02-13T15:52:08.407619448Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:08.408757 containerd[1886]: time="2025-02-13T15:52:08.408671145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:3,}"
Feb 13 15:52:08.773180 containerd[1886]: time="2025-02-13T15:52:08.771135933Z" level=error msg="Failed to destroy network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.773668 containerd[1886]: time="2025-02-13T15:52:08.773620990Z" level=error msg="encountered an error cleaning up failed sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.773969 containerd[1886]: time="2025-02-13T15:52:08.773933332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.774801 kubelet[2377]: E0213 15:52:08.774347    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.774801 kubelet[2377]: E0213 15:52:08.774419    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:08.774801 kubelet[2377]: E0213 15:52:08.774450    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:08.774992 kubelet[2377]: E0213 15:52:08.774522    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:08.810789 containerd[1886]: time="2025-02-13T15:52:08.810713822Z" level=error msg="Failed to destroy network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.811096 containerd[1886]: time="2025-02-13T15:52:08.811059061Z" level=error msg="encountered an error cleaning up failed sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.811176 containerd[1886]: time="2025-02-13T15:52:08.811132539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.811821 kubelet[2377]: E0213 15:52:08.811442    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:08.811821 kubelet[2377]: E0213 15:52:08.811494    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:08.811821 kubelet[2377]: E0213 15:52:08.811514    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:08.812041 kubelet[2377]: E0213 15:52:08.811564    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:09.024111 kubelet[2377]: E0213 15:52:09.023959    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:09.052823 kubelet[2377]: E0213 15:52:09.052557    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:09.262285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079-shm.mount: Deactivated successfully.
Feb 13 15:52:09.409879 kubelet[2377]: I0213 15:52:09.409216    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079"
Feb 13 15:52:09.410978 containerd[1886]: time="2025-02-13T15:52:09.410770519Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:09.411424 containerd[1886]: time="2025-02-13T15:52:09.411022231Z" level=info msg="Ensure that sandbox 5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079 in task-service has been cleanup successfully"
Feb 13 15:52:09.414307 containerd[1886]: time="2025-02-13T15:52:09.414266084Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:09.416107 containerd[1886]: time="2025-02-13T15:52:09.414491405Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:09.415021 systemd[1]: run-netns-cni\x2d104725a4\x2d73a8\x2d8653\x2da2f7\x2dcb9df2e05aef.mount: Deactivated successfully.
Feb 13 15:52:09.417312 containerd[1886]: time="2025-02-13T15:52:09.417278635Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:09.417457 containerd[1886]: time="2025-02-13T15:52:09.417398713Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:09.417457 containerd[1886]: time="2025-02-13T15:52:09.417417741Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:09.419255 containerd[1886]: time="2025-02-13T15:52:09.418567518Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:09.419255 containerd[1886]: time="2025-02-13T15:52:09.418680648Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:09.419255 containerd[1886]: time="2025-02-13T15:52:09.418760091Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:09.419440 containerd[1886]: time="2025-02-13T15:52:09.419333273Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:09.419440 containerd[1886]: time="2025-02-13T15:52:09.419432199Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:09.419528 containerd[1886]: time="2025-02-13T15:52:09.419446805Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:09.420387 containerd[1886]: time="2025-02-13T15:52:09.420352210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:4,}"
Feb 13 15:52:09.422626 kubelet[2377]: I0213 15:52:09.422590    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2"
Feb 13 15:52:09.425376 containerd[1886]: time="2025-02-13T15:52:09.424539851Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:09.425376 containerd[1886]: time="2025-02-13T15:52:09.425102413Z" level=info msg="Ensure that sandbox 13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2 in task-service has been cleanup successfully"
Feb 13 15:52:09.425788 containerd[1886]: time="2025-02-13T15:52:09.425549981Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:09.425788 containerd[1886]: time="2025-02-13T15:52:09.425570962Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:09.430161 containerd[1886]: time="2025-02-13T15:52:09.429348204Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:09.430161 containerd[1886]: time="2025-02-13T15:52:09.429464631Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:09.430161 containerd[1886]: time="2025-02-13T15:52:09.429480616Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:09.430880 containerd[1886]: time="2025-02-13T15:52:09.430846463Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:09.430982 containerd[1886]: time="2025-02-13T15:52:09.430954076Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:09.431025 containerd[1886]: time="2025-02-13T15:52:09.430978569Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:09.431191 systemd[1]: run-netns-cni\x2db1052b52\x2d7108\x2d5288\x2d2eec\x2d49960e2492be.mount: Deactivated successfully.
Feb 13 15:52:09.433009 containerd[1886]: time="2025-02-13T15:52:09.432017370Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:09.433009 containerd[1886]: time="2025-02-13T15:52:09.432122010Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:09.433009 containerd[1886]: time="2025-02-13T15:52:09.432138906Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:09.434696 containerd[1886]: time="2025-02-13T15:52:09.434229504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:4,}"
Feb 13 15:52:09.685395 containerd[1886]: time="2025-02-13T15:52:09.685180636Z" level=error msg="Failed to destroy network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.687519 containerd[1886]: time="2025-02-13T15:52:09.687237186Z" level=error msg="encountered an error cleaning up failed sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.687519 containerd[1886]: time="2025-02-13T15:52:09.687329341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.688506 kubelet[2377]: E0213 15:52:09.688457    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.688601 kubelet[2377]: E0213 15:52:09.688573    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:09.688654 kubelet[2377]: E0213 15:52:09.688606    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:09.689321 kubelet[2377]: E0213 15:52:09.689069    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:09.712100 containerd[1886]: time="2025-02-13T15:52:09.711952630Z" level=error msg="Failed to destroy network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.712610 containerd[1886]: time="2025-02-13T15:52:09.712475464Z" level=error msg="encountered an error cleaning up failed sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.712812 containerd[1886]: time="2025-02-13T15:52:09.712783806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.714534 kubelet[2377]: E0213 15:52:09.713746    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:09.714534 kubelet[2377]: E0213 15:52:09.713817    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:09.714534 kubelet[2377]: E0213 15:52:09.713850    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:09.714774 kubelet[2377]: E0213 15:52:09.713920    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:10.053217 kubelet[2377]: E0213 15:52:10.053104    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:10.256839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239-shm.mount: Deactivated successfully.
Feb 13 15:52:10.257048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2-shm.mount: Deactivated successfully.
Feb 13 15:52:10.431855 containerd[1886]: time="2025-02-13T15:52:10.431682460Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:10.432266 kubelet[2377]: I0213 15:52:10.432032    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2"
Feb 13 15:52:10.432836 containerd[1886]: time="2025-02-13T15:52:10.432799414Z" level=info msg="Ensure that sandbox 6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2 in task-service has been cleanup successfully"
Feb 13 15:52:10.435625 systemd[1]: run-netns-cni\x2d9b193197\x2da40e\x2d1616\x2d4314\x2dab02edcdf322.mount: Deactivated successfully.
Feb 13 15:52:10.438427 containerd[1886]: time="2025-02-13T15:52:10.436496144Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:10.438427 containerd[1886]: time="2025-02-13T15:52:10.436536464Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:10.439451 containerd[1886]: time="2025-02-13T15:52:10.439411165Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:10.439549 containerd[1886]: time="2025-02-13T15:52:10.439517154Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:10.439549 containerd[1886]: time="2025-02-13T15:52:10.439533594Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:10.442248 containerd[1886]: time="2025-02-13T15:52:10.442210176Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:10.442349 containerd[1886]: time="2025-02-13T15:52:10.442313416Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:10.442349 containerd[1886]: time="2025-02-13T15:52:10.442330760Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:10.443012 containerd[1886]: time="2025-02-13T15:52:10.442979950Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:10.443100 containerd[1886]: time="2025-02-13T15:52:10.443072114Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:10.443100 containerd[1886]: time="2025-02-13T15:52:10.443088103Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:10.445991 containerd[1886]: time="2025-02-13T15:52:10.445956937Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:10.446091 containerd[1886]: time="2025-02-13T15:52:10.446056204Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:10.446091 containerd[1886]: time="2025-02-13T15:52:10.446071856Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:10.446592 containerd[1886]: time="2025-02-13T15:52:10.446558518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:5,}"
Feb 13 15:52:10.447571 kubelet[2377]: I0213 15:52:10.447512    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239"
Feb 13 15:52:10.451562 containerd[1886]: time="2025-02-13T15:52:10.451517018Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:10.451817 containerd[1886]: time="2025-02-13T15:52:10.451789693Z" level=info msg="Ensure that sandbox 5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239 in task-service has been cleanup successfully"
Feb 13 15:52:10.453244 containerd[1886]: time="2025-02-13T15:52:10.452029544Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:10.453244 containerd[1886]: time="2025-02-13T15:52:10.452051530Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:10.455924 containerd[1886]: time="2025-02-13T15:52:10.454856979Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:10.455924 containerd[1886]: time="2025-02-13T15:52:10.454970554Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:10.455924 containerd[1886]: time="2025-02-13T15:52:10.454987520Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:10.455702 systemd[1]: run-netns-cni\x2d8aa260e7\x2d5135\x2de57d\x2d281a\x2dc23b58224519.mount: Deactivated successfully.
Feb 13 15:52:10.466252 containerd[1886]: time="2025-02-13T15:52:10.465960416Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:10.466252 containerd[1886]: time="2025-02-13T15:52:10.466110950Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:10.466252 containerd[1886]: time="2025-02-13T15:52:10.466128302Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:10.467863 containerd[1886]: time="2025-02-13T15:52:10.467528406Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:10.467863 containerd[1886]: time="2025-02-13T15:52:10.467622591Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:10.467863 containerd[1886]: time="2025-02-13T15:52:10.467633331Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:10.476686 containerd[1886]: time="2025-02-13T15:52:10.469253674Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:10.476686 containerd[1886]: time="2025-02-13T15:52:10.469408313Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:10.476686 containerd[1886]: time="2025-02-13T15:52:10.469424149Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:10.479009 containerd[1886]: time="2025-02-13T15:52:10.478964741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:5,}"
Feb 13 15:52:10.631701 containerd[1886]: time="2025-02-13T15:52:10.631643703Z" level=error msg="Failed to destroy network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.634281 containerd[1886]: time="2025-02-13T15:52:10.634128754Z" level=error msg="encountered an error cleaning up failed sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.634654 containerd[1886]: time="2025-02-13T15:52:10.634422707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.635801 kubelet[2377]: E0213 15:52:10.635759    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.635894 kubelet[2377]: E0213 15:52:10.635845    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:10.635894 kubelet[2377]: E0213 15:52:10.635880    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:10.636270 kubelet[2377]: E0213 15:52:10.636138    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:10.690403 containerd[1886]: time="2025-02-13T15:52:10.690268836Z" level=error msg="Failed to destroy network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.692448 containerd[1886]: time="2025-02-13T15:52:10.692191068Z" level=error msg="encountered an error cleaning up failed sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.692448 containerd[1886]: time="2025-02-13T15:52:10.692308105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.693497 kubelet[2377]: E0213 15:52:10.693005    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:10.693497 kubelet[2377]: E0213 15:52:10.693070    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:10.693497 kubelet[2377]: E0213 15:52:10.693191    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:10.693676 kubelet[2377]: E0213 15:52:10.693290    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:11.053867 kubelet[2377]: E0213 15:52:11.053831    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:11.256337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed-shm.mount: Deactivated successfully.
Feb 13 15:52:11.256473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74-shm.mount: Deactivated successfully.
Feb 13 15:52:11.454791 kubelet[2377]: I0213 15:52:11.454668    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74"
Feb 13 15:52:11.456118 containerd[1886]: time="2025-02-13T15:52:11.456083502Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:11.456785 containerd[1886]: time="2025-02-13T15:52:11.456424717Z" level=info msg="Ensure that sandbox 51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74 in task-service has been cleanup successfully"
Feb 13 15:52:11.460042 containerd[1886]: time="2025-02-13T15:52:11.459585113Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:11.460042 containerd[1886]: time="2025-02-13T15:52:11.459638120Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:11.462418 containerd[1886]: time="2025-02-13T15:52:11.460876847Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:11.462418 containerd[1886]: time="2025-02-13T15:52:11.462091168Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:11.462418 containerd[1886]: time="2025-02-13T15:52:11.462115739Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:11.461580 systemd[1]: run-netns-cni\x2d8432d1d2\x2d21b4\x2db77a\x2dafe0\x2df3c0d878e72d.mount: Deactivated successfully.
Feb 13 15:52:11.465547 containerd[1886]: time="2025-02-13T15:52:11.465466030Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:11.465878 containerd[1886]: time="2025-02-13T15:52:11.465818450Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:11.465878 containerd[1886]: time="2025-02-13T15:52:11.465839692Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:11.466970 containerd[1886]: time="2025-02-13T15:52:11.466940888Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:11.467162 containerd[1886]: time="2025-02-13T15:52:11.467045829Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:11.467162 containerd[1886]: time="2025-02-13T15:52:11.467060737Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:11.468067 containerd[1886]: time="2025-02-13T15:52:11.467902253Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:11.468067 containerd[1886]: time="2025-02-13T15:52:11.468003078Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:11.468067 containerd[1886]: time="2025-02-13T15:52:11.468017927Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:11.469234 containerd[1886]: time="2025-02-13T15:52:11.469212959Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:11.469840 containerd[1886]: time="2025-02-13T15:52:11.469481734Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:11.469840 containerd[1886]: time="2025-02-13T15:52:11.469500229Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:11.470599 containerd[1886]: time="2025-02-13T15:52:11.470560418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:6,}"
Feb 13 15:52:11.472834 kubelet[2377]: I0213 15:52:11.471698    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed"
Feb 13 15:52:11.472950 containerd[1886]: time="2025-02-13T15:52:11.472928405Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:11.474978 containerd[1886]: time="2025-02-13T15:52:11.474945043Z" level=info msg="Ensure that sandbox b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed in task-service has been cleanup successfully"
Feb 13 15:52:11.475181 containerd[1886]: time="2025-02-13T15:52:11.475152804Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:11.475609 containerd[1886]: time="2025-02-13T15:52:11.475177781Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:11.476467 containerd[1886]: time="2025-02-13T15:52:11.475884885Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:11.476467 containerd[1886]: time="2025-02-13T15:52:11.476069050Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:11.476467 containerd[1886]: time="2025-02-13T15:52:11.476086759Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.476881656Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.476981354Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.476997095Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.477281057Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.477375357Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.477388631Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.477705833Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.478013767Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.478030909Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.478310741Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.478529033Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:11.479147 containerd[1886]: time="2025-02-13T15:52:11.478548653Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:11.481006 containerd[1886]: time="2025-02-13T15:52:11.479341216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:6,}"
Feb 13 15:52:11.481974 systemd[1]: run-netns-cni\x2da6a09894\x2db052\x2daa79\x2d3aab\x2dbcdba0933fd7.mount: Deactivated successfully.
Feb 13 15:52:11.871297 containerd[1886]: time="2025-02-13T15:52:11.871156312Z" level=error msg="Failed to destroy network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.871966 containerd[1886]: time="2025-02-13T15:52:11.871750262Z" level=error msg="encountered an error cleaning up failed sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.871966 containerd[1886]: time="2025-02-13T15:52:11.871832937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.872296 kubelet[2377]: E0213 15:52:11.872271    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.872378 kubelet[2377]: E0213 15:52:11.872338    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:11.872378 kubelet[2377]: E0213 15:52:11.872371    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:11.872469 kubelet[2377]: E0213 15:52:11.872439    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:11.895616 containerd[1886]: time="2025-02-13T15:52:11.895561581Z" level=error msg="Failed to destroy network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.897057 containerd[1886]: time="2025-02-13T15:52:11.897005801Z" level=error msg="encountered an error cleaning up failed sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.897184 containerd[1886]: time="2025-02-13T15:52:11.897097931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.898296 kubelet[2377]: E0213 15:52:11.898047    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:11.898296 kubelet[2377]: E0213 15:52:11.898139    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:11.898296 kubelet[2377]: E0213 15:52:11.898189    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:11.898937 kubelet[2377]: E0213 15:52:11.898755    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:12.057330 kubelet[2377]: E0213 15:52:12.057243    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:12.260565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04-shm.mount: Deactivated successfully.
Feb 13 15:52:12.482904 kubelet[2377]: I0213 15:52:12.482580    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2"
Feb 13 15:52:12.484614 containerd[1886]: time="2025-02-13T15:52:12.484048080Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:12.485059 containerd[1886]: time="2025-02-13T15:52:12.484991352Z" level=info msg="Ensure that sandbox c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2 in task-service has been cleanup successfully"
Feb 13 15:52:12.487823 containerd[1886]: time="2025-02-13T15:52:12.486214745Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:12.487941 containerd[1886]: time="2025-02-13T15:52:12.487822880Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:12.490108 containerd[1886]: time="2025-02-13T15:52:12.489965001Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:12.490108 containerd[1886]: time="2025-02-13T15:52:12.490069072Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:12.490108 containerd[1886]: time="2025-02-13T15:52:12.490082613Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:12.497603 systemd[1]: run-netns-cni\x2db7a96201\x2ddeba\x2da631\x2d7e5d\x2db6137912471f.mount: Deactivated successfully.
Feb 13 15:52:12.506818 containerd[1886]: time="2025-02-13T15:52:12.506659439Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:12.506818 containerd[1886]: time="2025-02-13T15:52:12.506807415Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:12.506818 containerd[1886]: time="2025-02-13T15:52:12.506824063Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:12.518140 containerd[1886]: time="2025-02-13T15:52:12.517838153Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:12.518140 containerd[1886]: time="2025-02-13T15:52:12.517982259Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:12.518140 containerd[1886]: time="2025-02-13T15:52:12.517997690Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:12.529151 kubelet[2377]: I0213 15:52:12.526679    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04"
Feb 13 15:52:12.529290 containerd[1886]: time="2025-02-13T15:52:12.529177087Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:12.530141 containerd[1886]: time="2025-02-13T15:52:12.529901668Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:12.530141 containerd[1886]: time="2025-02-13T15:52:12.529932604Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:12.530319 containerd[1886]: time="2025-02-13T15:52:12.530223611Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:12.531071 containerd[1886]: time="2025-02-13T15:52:12.530748102Z" level=info msg="Ensure that sandbox 28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04 in task-service has been cleanup successfully"
Feb 13 15:52:12.532959 containerd[1886]: time="2025-02-13T15:52:12.532855193Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:12.533330 containerd[1886]: time="2025-02-13T15:52:12.533085927Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:12.533330 containerd[1886]: time="2025-02-13T15:52:12.533106610Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:12.536607 systemd[1]: run-netns-cni\x2d5a20bdb9\x2d2884\x2dcf1e\x2d6fa1\x2d927fba38b722.mount: Deactivated successfully.
Feb 13 15:52:12.537279 containerd[1886]: time="2025-02-13T15:52:12.536977893Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:12.537279 containerd[1886]: time="2025-02-13T15:52:12.537018728Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.538585744Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.538774171Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.538792265Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.539090834Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.539235799Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:12.539335 containerd[1886]: time="2025-02-13T15:52:12.539246758Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:12.540276 containerd[1886]: time="2025-02-13T15:52:12.540240377Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:12.540390 containerd[1886]: time="2025-02-13T15:52:12.540341077Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:12.540390 containerd[1886]: time="2025-02-13T15:52:12.540357404Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:12.541347 containerd[1886]: time="2025-02-13T15:52:12.541318051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:7,}"
Feb 13 15:52:12.542243 containerd[1886]: time="2025-02-13T15:52:12.542216881Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:12.542642 containerd[1886]: time="2025-02-13T15:52:12.542403560Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:12.542642 containerd[1886]: time="2025-02-13T15:52:12.542421867Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:12.543287 containerd[1886]: time="2025-02-13T15:52:12.543256085Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:12.543480 containerd[1886]: time="2025-02-13T15:52:12.543448786Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:12.543539 containerd[1886]: time="2025-02-13T15:52:12.543475882Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:12.546503 containerd[1886]: time="2025-02-13T15:52:12.546467569Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:12.546619 containerd[1886]: time="2025-02-13T15:52:12.546580329Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:12.546619 containerd[1886]: time="2025-02-13T15:52:12.546597115Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:12.548613 containerd[1886]: time="2025-02-13T15:52:12.548575587Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:12.548866 containerd[1886]: time="2025-02-13T15:52:12.548784721Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:12.548866 containerd[1886]: time="2025-02-13T15:52:12.548803332Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:12.551340 containerd[1886]: time="2025-02-13T15:52:12.550930131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:7,}"
Feb 13 15:52:12.754590 containerd[1886]: time="2025-02-13T15:52:12.754535376Z" level=error msg="Failed to destroy network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.756105 containerd[1886]: time="2025-02-13T15:52:12.756057083Z" level=error msg="encountered an error cleaning up failed sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.756235 containerd[1886]: time="2025-02-13T15:52:12.756143788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.756566 kubelet[2377]: E0213 15:52:12.756395    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.756566 kubelet[2377]: E0213 15:52:12.756470    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:12.756566 kubelet[2377]: E0213 15:52:12.756507    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:12.757448 kubelet[2377]: E0213 15:52:12.757406    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:12.777464 containerd[1886]: time="2025-02-13T15:52:12.777343456Z" level=error msg="Failed to destroy network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.777988 containerd[1886]: time="2025-02-13T15:52:12.777944710Z" level=error msg="encountered an error cleaning up failed sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.778074 containerd[1886]: time="2025-02-13T15:52:12.778018126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.778413 kubelet[2377]: E0213 15:52:12.778286    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:12.778413 kubelet[2377]: E0213 15:52:12.778351    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:12.778413 kubelet[2377]: E0213 15:52:12.778382    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:12.779105 kubelet[2377]: E0213 15:52:12.778956    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
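The error lines above all share one shape: a containerd `RunPodSandbox … failed` message carrying the pod name, a 64-hex-digit sandbox ID, and the root cause (`stat /var/lib/calico/nodename: no such file or directory`). A minimal sketch of pulling those three fields out of such a line with Python's stdlib `re` — the helper name `parse_sandbox_failure` is illustrative, not part of any tool in this log:

```python
import re

# One containerd error line from the log above, trimmed to its msg/error fields.
# The \\" sequences reproduce the escaped quotes exactly as containerd logs them.
line = ('time="2025-02-13T15:52:12.778018126Z" level=error msg="RunPodSandbox for '
        '&PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,'
        'Namespace:calico-system,Attempt:7,} failed, error" '
        'error="failed to setup network for sandbox '
        '\\"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\\": '
        'plugin type=\\"calico\\" failed (add): stat /var/lib/calico/nodename: '
        'no such file or directory: check that the calico/node container is running '
        'and has mounted /var/lib/calico/"')

def parse_sandbox_failure(line):
    """Extract pod name, sandbox ID, and root cause from a containerd RunPodSandbox error line."""
    name = re.search(r'Name:([^,]+),', line)                      # pod name inside PodSandboxMetadata
    sandbox = re.search(r'\\"([0-9a-f]{64})\\"', line)            # 64-hex sandbox ID in escaped quotes
    cause = re.search(r'failed \((?:add|delete)\): ([^"\\]+)', line)  # CNI plugin root cause
    return {
        "pod": name.group(1) if name else None,
        "sandbox": sandbox.group(1) if sandbox else None,
        "cause": cause.group(1).strip() if cause else None,
    }

info = parse_sandbox_failure(line)
print(info["pod"])      # csi-node-driver-hswh5
print(info["cause"])
```

The `cause` field is what matters operationally: `/var/lib/calico/nodename` is written by the calico/node container on startup, so every add and delete here fails the same way until that container is running, which is why the attempt counter keeps climbing (Attempt:7, then 8) with a fresh sandbox ID each time.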
Feb 13 15:52:13.059599 kubelet[2377]: E0213 15:52:13.059468    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:13.263513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63-shm.mount: Deactivated successfully.
Feb 13 15:52:13.536474 kubelet[2377]: I0213 15:52:13.535448    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63"
Feb 13 15:52:13.536660 containerd[1886]: time="2025-02-13T15:52:13.536478726Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:13.537229 containerd[1886]: time="2025-02-13T15:52:13.536857786Z" level=info msg="Ensure that sandbox def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63 in task-service has been cleanup successfully"
Feb 13 15:52:13.537229 containerd[1886]: time="2025-02-13T15:52:13.537106969Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:13.537229 containerd[1886]: time="2025-02-13T15:52:13.537127080Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:13.540992 containerd[1886]: time="2025-02-13T15:52:13.540140323Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:13.540992 containerd[1886]: time="2025-02-13T15:52:13.540243044Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:13.540992 containerd[1886]: time="2025-02-13T15:52:13.540258551Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:13.541064 systemd[1]: run-netns-cni\x2dfde1281e\x2db780\x2d2e36\x2dafaa\x2dee4025f53fbb.mount: Deactivated successfully.
Feb 13 15:52:13.542897 containerd[1886]: time="2025-02-13T15:52:13.542165331Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:13.542897 containerd[1886]: time="2025-02-13T15:52:13.542264456Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:13.542897 containerd[1886]: time="2025-02-13T15:52:13.542277635Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:13.544510 containerd[1886]: time="2025-02-13T15:52:13.544067567Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:13.544510 containerd[1886]: time="2025-02-13T15:52:13.544159650Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:13.544510 containerd[1886]: time="2025-02-13T15:52:13.544170075Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:13.544996 containerd[1886]: time="2025-02-13T15:52:13.544970721Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:13.545072 containerd[1886]: time="2025-02-13T15:52:13.545058183Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:13.545127 containerd[1886]: time="2025-02-13T15:52:13.545072892Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:13.546178 containerd[1886]: time="2025-02-13T15:52:13.545892612Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:13.546178 containerd[1886]: time="2025-02-13T15:52:13.545983403Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:13.546178 containerd[1886]: time="2025-02-13T15:52:13.545997523Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:13.546855 containerd[1886]: time="2025-02-13T15:52:13.546827768Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:13.546933 containerd[1886]: time="2025-02-13T15:52:13.546918133Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:13.548067 containerd[1886]: time="2025-02-13T15:52:13.546933179Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:13.548067 containerd[1886]: time="2025-02-13T15:52:13.547308097Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:13.548067 containerd[1886]: time="2025-02-13T15:52:13.547393757Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:13.548067 containerd[1886]: time="2025-02-13T15:52:13.547408238Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:13.548558 containerd[1886]: time="2025-02-13T15:52:13.548530492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:8,}"
Feb 13 15:52:13.549380 kubelet[2377]: I0213 15:52:13.549357    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92"
Feb 13 15:52:13.550660 containerd[1886]: time="2025-02-13T15:52:13.550635886Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:13.551033 containerd[1886]: time="2025-02-13T15:52:13.551008663Z" level=info msg="Ensure that sandbox 0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92 in task-service has been cleanup successfully"
Feb 13 15:52:13.551304 containerd[1886]: time="2025-02-13T15:52:13.551281614Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:13.551781 containerd[1886]: time="2025-02-13T15:52:13.551758043Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:13.554893 systemd[1]: run-netns-cni\x2d0ea22be9\x2d61e4\x2db299\x2d8585\x2d401e3c47bbe1.mount: Deactivated successfully.
Feb 13 15:52:13.556235 containerd[1886]: time="2025-02-13T15:52:13.556201927Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:13.556678 containerd[1886]: time="2025-02-13T15:52:13.556653951Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:13.556892 containerd[1886]: time="2025-02-13T15:52:13.556806353Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:13.565807 containerd[1886]: time="2025-02-13T15:52:13.565680599Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:13.566568 containerd[1886]: time="2025-02-13T15:52:13.566280401Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:13.566568 containerd[1886]: time="2025-02-13T15:52:13.566302340Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:13.566955 containerd[1886]: time="2025-02-13T15:52:13.566820658Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:13.567123 containerd[1886]: time="2025-02-13T15:52:13.567108195Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:13.567420 containerd[1886]: time="2025-02-13T15:52:13.567292028Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:13.569597 containerd[1886]: time="2025-02-13T15:52:13.569570559Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:13.570855 containerd[1886]: time="2025-02-13T15:52:13.570828586Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:13.571042 containerd[1886]: time="2025-02-13T15:52:13.570955709Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:13.571812 containerd[1886]: time="2025-02-13T15:52:13.571620053Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:13.572187 containerd[1886]: time="2025-02-13T15:52:13.572059019Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:13.572187 containerd[1886]: time="2025-02-13T15:52:13.572080051Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:13.573783 containerd[1886]: time="2025-02-13T15:52:13.572902640Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:13.573783 containerd[1886]: time="2025-02-13T15:52:13.572999852Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:13.573783 containerd[1886]: time="2025-02-13T15:52:13.573059365Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:13.587838 containerd[1886]: time="2025-02-13T15:52:13.587798937Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:13.589234 containerd[1886]: time="2025-02-13T15:52:13.589064793Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:13.589234 containerd[1886]: time="2025-02-13T15:52:13.589265384Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:13.593909 containerd[1886]: time="2025-02-13T15:52:13.593377744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:8,}"
Feb 13 15:52:13.808373 containerd[1886]: time="2025-02-13T15:52:13.805967555Z" level=error msg="Failed to destroy network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.808373 containerd[1886]: time="2025-02-13T15:52:13.807079254Z" level=error msg="encountered an error cleaning up failed sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.808373 containerd[1886]: time="2025-02-13T15:52:13.807226922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:8,} failed, error" error="failed to setup network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.808704 kubelet[2377]: E0213 15:52:13.807597    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.808704 kubelet[2377]: E0213 15:52:13.807674    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:13.808704 kubelet[2377]: E0213 15:52:13.807775    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:13.809708 kubelet[2377]: E0213 15:52:13.809096    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:13.890191 containerd[1886]: time="2025-02-13T15:52:13.889914373Z" level=error msg="Failed to destroy network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.898293 containerd[1886]: time="2025-02-13T15:52:13.896551565Z" level=error msg="encountered an error cleaning up failed sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.898293 containerd[1886]: time="2025-02-13T15:52:13.897832299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.898482 kubelet[2377]: E0213 15:52:13.898095    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:13.898482 kubelet[2377]: E0213 15:52:13.898156    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:13.898482 kubelet[2377]: E0213 15:52:13.898186    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:13.898643 kubelet[2377]: E0213 15:52:13.898253    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:14.060912 kubelet[2377]: E0213 15:52:14.059991    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:14.257237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119-shm.mount: Deactivated successfully.
Feb 13 15:52:14.590930 kubelet[2377]: I0213 15:52:14.590859    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1"
Feb 13 15:52:14.592767 containerd[1886]: time="2025-02-13T15:52:14.592291276Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:14.592767 containerd[1886]: time="2025-02-13T15:52:14.592606184Z" level=info msg="Ensure that sandbox ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1 in task-service has been cleanup successfully"
Feb 13 15:52:14.594939 containerd[1886]: time="2025-02-13T15:52:14.594818614Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:14.594939 containerd[1886]: time="2025-02-13T15:52:14.594858184Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:14.597928 containerd[1886]: time="2025-02-13T15:52:14.597334062Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:14.597928 containerd[1886]: time="2025-02-13T15:52:14.597436607Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:14.597928 containerd[1886]: time="2025-02-13T15:52:14.597452150Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:14.599540 containerd[1886]: time="2025-02-13T15:52:14.599469733Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:14.600107 systemd[1]: run-netns-cni\x2d8e71c059\x2d305f\x2db267\x2d7540\x2d21106f4c777a.mount: Deactivated successfully.
Feb 13 15:52:14.601440 containerd[1886]: time="2025-02-13T15:52:14.601310113Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:14.601440 containerd[1886]: time="2025-02-13T15:52:14.601335959Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:14.604625 containerd[1886]: time="2025-02-13T15:52:14.604053816Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:14.604625 containerd[1886]: time="2025-02-13T15:52:14.604157671Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:14.604625 containerd[1886]: time="2025-02-13T15:52:14.604173328Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:14.605161 containerd[1886]: time="2025-02-13T15:52:14.605138125Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:14.605466 containerd[1886]: time="2025-02-13T15:52:14.605360916Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:14.605621 containerd[1886]: time="2025-02-13T15:52:14.605600407Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:14.607811 containerd[1886]: time="2025-02-13T15:52:14.607071266Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:14.607988 containerd[1886]: time="2025-02-13T15:52:14.607235039Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:14.607988 containerd[1886]: time="2025-02-13T15:52:14.607965575Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:14.608421 containerd[1886]: time="2025-02-13T15:52:14.608336298Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:14.608492 containerd[1886]: time="2025-02-13T15:52:14.608430179Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:14.608492 containerd[1886]: time="2025-02-13T15:52:14.608444372Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:14.609252 containerd[1886]: time="2025-02-13T15:52:14.609167643Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:14.609327 containerd[1886]: time="2025-02-13T15:52:14.609259639Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:14.609327 containerd[1886]: time="2025-02-13T15:52:14.609273851Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:14.610327 containerd[1886]: time="2025-02-13T15:52:14.610166691Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:14.610482 containerd[1886]: time="2025-02-13T15:52:14.610441800Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:14.610482 containerd[1886]: time="2025-02-13T15:52:14.610462100Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:14.612495 containerd[1886]: time="2025-02-13T15:52:14.612414608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:9,}"
Feb 13 15:52:14.621607 kubelet[2377]: I0213 15:52:14.620752    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119"
Feb 13 15:52:14.622976 containerd[1886]: time="2025-02-13T15:52:14.621674227Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:14.623338 containerd[1886]: time="2025-02-13T15:52:14.623306418Z" level=info msg="Ensure that sandbox 412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119 in task-service has been cleanup successfully"
Feb 13 15:52:14.630659 containerd[1886]: time="2025-02-13T15:52:14.630608010Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:14.630659 containerd[1886]: time="2025-02-13T15:52:14.630651038Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:14.632064 containerd[1886]: time="2025-02-13T15:52:14.631495244Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:14.632064 containerd[1886]: time="2025-02-13T15:52:14.631799085Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:14.632064 containerd[1886]: time="2025-02-13T15:52:14.631820461Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:14.632384 systemd[1]: run-netns-cni\x2da117636a\x2db776\x2deac7\x2d9da7\x2d0b1044cac4c8.mount: Deactivated successfully.
Feb 13 15:52:14.636985 containerd[1886]: time="2025-02-13T15:52:14.636748555Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:14.636985 containerd[1886]: time="2025-02-13T15:52:14.636876779Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:14.636985 containerd[1886]: time="2025-02-13T15:52:14.636894951Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:14.640092 containerd[1886]: time="2025-02-13T15:52:14.640054712Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:14.640215 containerd[1886]: time="2025-02-13T15:52:14.640170407Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:14.640215 containerd[1886]: time="2025-02-13T15:52:14.640187364Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:14.640636 containerd[1886]: time="2025-02-13T15:52:14.640608759Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:14.640752 containerd[1886]: time="2025-02-13T15:52:14.640700519Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:14.640752 containerd[1886]: time="2025-02-13T15:52:14.640717523Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:14.641849 containerd[1886]: time="2025-02-13T15:52:14.641666771Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:14.642170 containerd[1886]: time="2025-02-13T15:52:14.641877621Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:14.642170 containerd[1886]: time="2025-02-13T15:52:14.641896606Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:14.642515 containerd[1886]: time="2025-02-13T15:52:14.642489216Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:14.642913 containerd[1886]: time="2025-02-13T15:52:14.642822475Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:14.642913 containerd[1886]: time="2025-02-13T15:52:14.642843085Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:14.644889 containerd[1886]: time="2025-02-13T15:52:14.644861366Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:14.644991 containerd[1886]: time="2025-02-13T15:52:14.644954766Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:14.644991 containerd[1886]: time="2025-02-13T15:52:14.644971661Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:14.645836 containerd[1886]: time="2025-02-13T15:52:14.645680683Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:14.645939 containerd[1886]: time="2025-02-13T15:52:14.645874525Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:14.645939 containerd[1886]: time="2025-02-13T15:52:14.645894460Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:14.646481 containerd[1886]: time="2025-02-13T15:52:14.646452495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:9,}"
Feb 13 15:52:14.944315 containerd[1886]: time="2025-02-13T15:52:14.944114329Z" level=error msg="Failed to destroy network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.947234 containerd[1886]: time="2025-02-13T15:52:14.944907013Z" level=error msg="encountered an error cleaning up failed sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.947234 containerd[1886]: time="2025-02-13T15:52:14.945239608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:9,} failed, error" error="failed to setup network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.947234 containerd[1886]: time="2025-02-13T15:52:14.946878075Z" level=error msg="Failed to destroy network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.947850 containerd[1886]: time="2025-02-13T15:52:14.947818293Z" level=error msg="encountered an error cleaning up failed sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.948033 containerd[1886]: time="2025-02-13T15:52:14.948008126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.948371 kubelet[2377]: E0213 15:52:14.948342    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.948648 kubelet[2377]: E0213 15:52:14.948581    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:14.948648 kubelet[2377]: E0213 15:52:14.948614    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:14.949086 kubelet[2377]: E0213 15:52:14.948899    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:14.949086 kubelet[2377]: E0213 15:52:14.948342    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:14.949086 kubelet[2377]: E0213 15:52:14.948957    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:14.951051 kubelet[2377]: E0213 15:52:14.949320    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:14.951569 kubelet[2377]: E0213 15:52:14.951296    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:15.063777 kubelet[2377]: E0213 15:52:15.061094    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:15.256748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b-shm.mount: Deactivated successfully.
Feb 13 15:52:15.356880 update_engine[1871]: I20250213 15:52:15.356794  1871 update_attempter.cc:509] Updating boot flags...
Feb 13 15:52:15.562100 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3463)
Feb 13 15:52:15.644397 kubelet[2377]: I0213 15:52:15.644363    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef"
Feb 13 15:52:15.649758 containerd[1886]: time="2025-02-13T15:52:15.647131406Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:15.649758 containerd[1886]: time="2025-02-13T15:52:15.647399783Z" level=info msg="Ensure that sandbox 66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef in task-service has been cleanup successfully"
Feb 13 15:52:15.651261 systemd[1]: run-netns-cni\x2de808965a\x2d9627\x2db545\x2d2f10\x2d11731bfb19ab.mount: Deactivated successfully.
Feb 13 15:52:15.657236 containerd[1886]: time="2025-02-13T15:52:15.654855698Z" level=info msg="TearDown network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" successfully"
Feb 13 15:52:15.657236 containerd[1886]: time="2025-02-13T15:52:15.654897676Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" returns successfully"
Feb 13 15:52:15.661522 containerd[1886]: time="2025-02-13T15:52:15.659215488Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:15.661522 containerd[1886]: time="2025-02-13T15:52:15.659352522Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:15.661522 containerd[1886]: time="2025-02-13T15:52:15.659414378Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:15.663340 containerd[1886]: time="2025-02-13T15:52:15.663303430Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:15.664373 containerd[1886]: time="2025-02-13T15:52:15.664335556Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:15.666089 containerd[1886]: time="2025-02-13T15:52:15.666054801Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:15.669451 containerd[1886]: time="2025-02-13T15:52:15.669415067Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:15.671291 containerd[1886]: time="2025-02-13T15:52:15.671257142Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:15.671491 containerd[1886]: time="2025-02-13T15:52:15.671466226Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:15.674335 containerd[1886]: time="2025-02-13T15:52:15.674300234Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:15.675756 containerd[1886]: time="2025-02-13T15:52:15.675516706Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:15.675756 containerd[1886]: time="2025-02-13T15:52:15.675543450Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:15.678848 containerd[1886]: time="2025-02-13T15:52:15.678604440Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:15.678848 containerd[1886]: time="2025-02-13T15:52:15.678737117Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:15.678848 containerd[1886]: time="2025-02-13T15:52:15.678751178Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:15.681944 containerd[1886]: time="2025-02-13T15:52:15.680398224Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:15.681944 containerd[1886]: time="2025-02-13T15:52:15.680511305Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:15.681944 containerd[1886]: time="2025-02-13T15:52:15.680527105Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:15.682163 kubelet[2377]: I0213 15:52:15.680805    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b"
Feb 13 15:52:15.682552 containerd[1886]: time="2025-02-13T15:52:15.682240241Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:15.685965 containerd[1886]: time="2025-02-13T15:52:15.683502669Z" level=info msg="Ensure that sandbox bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b in task-service has been cleanup successfully"
Feb 13 15:52:15.687104 containerd[1886]: time="2025-02-13T15:52:15.686881313Z" level=info msg="TearDown network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" successfully"
Feb 13 15:52:15.687104 containerd[1886]: time="2025-02-13T15:52:15.686915240Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" returns successfully"
Feb 13 15:52:15.688905 systemd[1]: run-netns-cni\x2dc561373b\x2dd6fb\x2d3b71\x2d6da1\x2d76ef7fea07c6.mount: Deactivated successfully.
Feb 13 15:52:15.690315 containerd[1886]: time="2025-02-13T15:52:15.689901908Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:15.690315 containerd[1886]: time="2025-02-13T15:52:15.690025871Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:15.690315 containerd[1886]: time="2025-02-13T15:52:15.690040289Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692186216Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692300384Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692321630Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692300600Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692444115Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:15.692535 containerd[1886]: time="2025-02-13T15:52:15.692456540Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.695712104Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.695852209Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.695870140Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.695969493Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.696042698Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.696055173Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:15.697065 containerd[1886]: time="2025-02-13T15:52:15.697067450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:10,}"
Feb 13 15:52:15.699781 containerd[1886]: time="2025-02-13T15:52:15.699225117Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:15.699781 containerd[1886]: time="2025-02-13T15:52:15.699346172Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:15.699781 containerd[1886]: time="2025-02-13T15:52:15.699364465Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:15.703123 containerd[1886]: time="2025-02-13T15:52:15.703087345Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:15.703397 containerd[1886]: time="2025-02-13T15:52:15.703373121Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:15.703575 containerd[1886]: time="2025-02-13T15:52:15.703425254Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:15.704998 containerd[1886]: time="2025-02-13T15:52:15.704969987Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:15.705467 containerd[1886]: time="2025-02-13T15:52:15.705082346Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:15.705467 containerd[1886]: time="2025-02-13T15:52:15.705102604Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:15.708197 containerd[1886]: time="2025-02-13T15:52:15.707806850Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:15.709049 containerd[1886]: time="2025-02-13T15:52:15.709021263Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:15.710923 containerd[1886]: time="2025-02-13T15:52:15.709350020Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:15.714191 containerd[1886]: time="2025-02-13T15:52:15.713857444Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:15.714191 containerd[1886]: time="2025-02-13T15:52:15.714031769Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:15.714191 containerd[1886]: time="2025-02-13T15:52:15.714051572Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.714938401Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.715204452Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.715223276Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.715712194Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.715861290Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.715996432Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:15.725195 containerd[1886]: time="2025-02-13T15:52:15.716819386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:10,}"
Feb 13 15:52:16.061706 kubelet[2377]: E0213 15:52:16.061644    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:16.109763 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3464)
Feb 13 15:52:16.261413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095232179.mount: Deactivated successfully.
Feb 13 15:52:16.331650 containerd[1886]: time="2025-02-13T15:52:16.331528613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:16.334797 containerd[1886]: time="2025-02-13T15:52:16.334730929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 15:52:16.350170 containerd[1886]: time="2025-02-13T15:52:16.350098502Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:16.357313 containerd[1886]: time="2025-02-13T15:52:16.357254533Z" level=error msg="Failed to destroy network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.359755 containerd[1886]: time="2025-02-13T15:52:16.358096484Z" level=error msg="encountered an error cleaning up failed sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.359755 containerd[1886]: time="2025-02-13T15:52:16.358188739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:10,} failed, error" error="failed to setup network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.360143 kubelet[2377]: E0213 15:52:16.358502    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.360143 kubelet[2377]: E0213 15:52:16.358589    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:16.360143 kubelet[2377]: E0213 15:52:16.358639    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:16.367167 kubelet[2377]: E0213 15:52:16.359804    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:16.374081 containerd[1886]: time="2025-02-13T15:52:16.367023967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:16.374081 containerd[1886]: time="2025-02-13T15:52:16.369924071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.041213827s"
Feb 13 15:52:16.374081 containerd[1886]: time="2025-02-13T15:52:16.369998675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 15:52:16.364895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4-shm.mount: Deactivated successfully.
Feb 13 15:52:16.406746 containerd[1886]: time="2025-02-13T15:52:16.406235817Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:52:16.430571 containerd[1886]: time="2025-02-13T15:52:16.430530691Z" level=error msg="Failed to destroy network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.431299 containerd[1886]: time="2025-02-13T15:52:16.431221700Z" level=error msg="encountered an error cleaning up failed sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.431424 containerd[1886]: time="2025-02-13T15:52:16.431315490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.432284 kubelet[2377]: E0213 15:52:16.431608    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:16.432284 kubelet[2377]: E0213 15:52:16.431673    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:16.432284 kubelet[2377]: E0213 15:52:16.431704    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:16.432495 kubelet[2377]: E0213 15:52:16.431787    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:16.436998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98-shm.mount: Deactivated successfully.
Feb 13 15:52:16.455250 containerd[1886]: time="2025-02-13T15:52:16.455194753Z" level=info msg="CreateContainer within sandbox \"a4ccfb4ca23f05ad7ffc7c3f3b679a18e43b22dc63a968ae6e1abea3be4009f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970\""
Feb 13 15:52:16.456387 containerd[1886]: time="2025-02-13T15:52:16.456263170Z" level=info msg="StartContainer for \"c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970\""
Feb 13 15:52:16.554973 systemd[1]: Started cri-containerd-c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970.scope - libcontainer container c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970.
Feb 13 15:52:16.608763 containerd[1886]: time="2025-02-13T15:52:16.608610832Z" level=info msg="StartContainer for \"c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970\" returns successfully"
Feb 13 15:52:16.699796 kubelet[2377]: I0213 15:52:16.698548    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4"
Feb 13 15:52:16.700745 containerd[1886]: time="2025-02-13T15:52:16.700212571Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\""
Feb 13 15:52:16.700745 containerd[1886]: time="2025-02-13T15:52:16.700517526Z" level=info msg="Ensure that sandbox 7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4 in task-service has been cleanup successfully"
Feb 13 15:52:16.701646 containerd[1886]: time="2025-02-13T15:52:16.701610216Z" level=info msg="TearDown network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" successfully"
Feb 13 15:52:16.702678 containerd[1886]: time="2025-02-13T15:52:16.701796069Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" returns successfully"
Feb 13 15:52:16.705068 containerd[1886]: time="2025-02-13T15:52:16.704923606Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:16.705231 containerd[1886]: time="2025-02-13T15:52:16.705053762Z" level=info msg="TearDown network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" successfully"
Feb 13 15:52:16.705231 containerd[1886]: time="2025-02-13T15:52:16.705181416Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" returns successfully"
Feb 13 15:52:16.708040 containerd[1886]: time="2025-02-13T15:52:16.707928493Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:16.709273 containerd[1886]: time="2025-02-13T15:52:16.708889549Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:16.709273 containerd[1886]: time="2025-02-13T15:52:16.708919135Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:16.709429 containerd[1886]: time="2025-02-13T15:52:16.709360606Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:16.709474 containerd[1886]: time="2025-02-13T15:52:16.709450978Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:16.709474 containerd[1886]: time="2025-02-13T15:52:16.709465166Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.710249497Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.710342579Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.710362621Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.710969488Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.711056927Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:16.711649 containerd[1886]: time="2025-02-13T15:52:16.711071609Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:16.714033 containerd[1886]: time="2025-02-13T15:52:16.711620364Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.714792628Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.714822570Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.715522726Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\""
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.715789400Z" level=info msg="Ensure that sandbox 1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98 in task-service has been cleanup successfully"
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.716038014Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.716120043Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:16.716745 containerd[1886]: time="2025-02-13T15:52:16.716133821Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:16.719219 kubelet[2377]: I0213 15:52:16.714899    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98"
Feb 13 15:52:16.719283 containerd[1886]: time="2025-02-13T15:52:16.718949326Z" level=info msg="TearDown network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" successfully"
Feb 13 15:52:16.719283 containerd[1886]: time="2025-02-13T15:52:16.718995662Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" returns successfully"
Feb 13 15:52:16.723197 containerd[1886]: time="2025-02-13T15:52:16.722676333Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:16.723197 containerd[1886]: time="2025-02-13T15:52:16.723156149Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:16.723767 containerd[1886]: time="2025-02-13T15:52:16.723263270Z" level=info msg="TearDown network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" successfully"
Feb 13 15:52:16.723767 containerd[1886]: time="2025-02-13T15:52:16.723277838Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" returns successfully"
Feb 13 15:52:16.726287 containerd[1886]: time="2025-02-13T15:52:16.726136936Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:16.726287 containerd[1886]: time="2025-02-13T15:52:16.726165102Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:16.726800 containerd[1886]: time="2025-02-13T15:52:16.726375807Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:16.726800 containerd[1886]: time="2025-02-13T15:52:16.726580343Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:16.726800 containerd[1886]: time="2025-02-13T15:52:16.726691899Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727295154Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727501553Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727519281Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727603564Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727680816Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.727693462Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728230517Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728321894Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728335893Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728421844Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728489233Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.728502014Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.729120977Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:16.729370 containerd[1886]: time="2025-02-13T15:52:16.729343916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:11,}"
Feb 13 15:52:16.730214 containerd[1886]: time="2025-02-13T15:52:16.729460214Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:16.730214 containerd[1886]: time="2025-02-13T15:52:16.729475796Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:16.730214 containerd[1886]: time="2025-02-13T15:52:16.730031782Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:16.730214 containerd[1886]: time="2025-02-13T15:52:16.730153828Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:16.730214 containerd[1886]: time="2025-02-13T15:52:16.730187965Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.730538285Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.730655915Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.730670147Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731195315Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731280764Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731294534Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731789459Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731891443Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.731946600Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.732256854Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.732360181Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.732374559Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:16.735756 containerd[1886]: time="2025-02-13T15:52:16.734269646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:11,}"
Feb 13 15:52:16.736330 kubelet[2377]: I0213 15:52:16.735465    2377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ck9h2" podStartSLOduration=4.580147719 podStartE2EDuration="27.735416695s" podCreationTimestamp="2025-02-13 15:51:49 +0000 UTC" firstStartedPulling="2025-02-13 15:51:53.215013916 +0000 UTC m=+5.175999891" lastFinishedPulling="2025-02-13 15:52:16.370282893 +0000 UTC m=+28.331268867" observedRunningTime="2025-02-13 15:52:16.725710438 +0000 UTC m=+28.686696435" watchObservedRunningTime="2025-02-13 15:52:16.735416695 +0000 UTC m=+28.696402719"
Feb 13 15:52:16.742100 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:52:16.742192 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 15:52:17.062116 kubelet[2377]: E0213 15:52:17.062064    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.007 [INFO][3771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.007 [INFO][3771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" iface="eth0" netns="/var/run/netns/cni-b1864d1f-c7da-e966-69a6-856cb452a841"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.008 [INFO][3771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" iface="eth0" netns="/var/run/netns/cni-b1864d1f-c7da-e966-69a6-856cb452a841"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.015 [INFO][3771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" iface="eth0" netns="/var/run/netns/cni-b1864d1f-c7da-e966-69a6-856cb452a841"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.017 [INFO][3771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.020 [INFO][3771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.109 [INFO][3805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" HandleID="k8s-pod-network.07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.109 [INFO][3805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.110 [INFO][3805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.127 [WARNING][3805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" HandleID="k8s-pod-network.07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.127 [INFO][3805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" HandleID="k8s-pod-network.07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.131 [INFO][3805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:52:17.138772 containerd[1886]: 2025-02-13 15:52:17.136 [INFO][3771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86"
Feb 13 15:52:17.144062 containerd[1886]: time="2025-02-13T15:52:17.143556771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:11,} failed, error" error="failed to setup network for sandbox \"07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:17.145131 kubelet[2377]: E0213 15:52:17.144882    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:17.145131 kubelet[2377]: E0213 15:52:17.144951    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:17.146226 kubelet[2377]: E0213 15:52:17.146188    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-2dzlr"
Feb 13 15:52:17.146562 kubelet[2377]: E0213 15:52:17.146385    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-2dzlr_default(d5b6846b-905c-4b4d-ab65-740bbfe8f5eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07adc6b4266996b51153a0b156b6f1ec68ca1e02cb7dcc659779a86633120c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-2dzlr" podUID="d5b6846b-905c-4b4d-ab65-740bbfe8f5eb"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.039 [INFO][3789] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.039 [INFO][3789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" iface="eth0" netns="/var/run/netns/cni-c24991de-5753-c461-37d6-ac2c86d2c039"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.039 [INFO][3789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" iface="eth0" netns="/var/run/netns/cni-c24991de-5753-c461-37d6-ac2c86d2c039"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.040 [INFO][3789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" iface="eth0" netns="/var/run/netns/cni-c24991de-5753-c461-37d6-ac2c86d2c039"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.040 [INFO][3789] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.040 [INFO][3789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.111 [INFO][3809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" HandleID="k8s-pod-network.a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.111 [INFO][3809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.131 [INFO][3809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.146 [WARNING][3809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" HandleID="k8s-pod-network.a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.146 [INFO][3809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" HandleID="k8s-pod-network.a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.152 [INFO][3809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:52:17.155775 containerd[1886]: 2025-02-13 15:52:17.153 [INFO][3789] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f"
Feb 13 15:52:17.165671 containerd[1886]: time="2025-02-13T15:52:17.165606826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:11,} failed, error" error="failed to setup network for sandbox \"a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:17.166659 kubelet[2377]: E0213 15:52:17.166626    2377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:52:17.166812 kubelet[2377]: E0213 15:52:17.166690    2377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:17.166812 kubelet[2377]: E0213 15:52:17.166734    2377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hswh5"
Feb 13 15:52:17.166812 kubelet[2377]: E0213 15:52:17.166798    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hswh5_calico-system(c021f669-a6e0-4344-be54-42ff0a3b9776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0df08265248b27ac1a46f7f9ee0d672594cff36eed551d80f5e764f928b698f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hswh5" podUID="c021f669-a6e0-4344-be54-42ff0a3b9776"
Feb 13 15:52:17.257454 systemd[1]: run-netns-cni\x2dfacf47ae\x2d1b77\x2d9a5a\x2da64f\x2d0f3e8f05e891.mount: Deactivated successfully.
Feb 13 15:52:17.258291 systemd[1]: run-netns-cni\x2dee14793f\x2db79c\x2de8fc\x2d1e1a\x2d9377b5471943.mount: Deactivated successfully.
Feb 13 15:52:17.718181 containerd[1886]: time="2025-02-13T15:52:17.717645227Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\""
Feb 13 15:52:17.718181 containerd[1886]: time="2025-02-13T15:52:17.717953176Z" level=info msg="TearDown network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" successfully"
Feb 13 15:52:17.718181 containerd[1886]: time="2025-02-13T15:52:17.718017422Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" returns successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.718231416Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\""
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.718370064Z" level=info msg="TearDown network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.718393089Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" returns successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719587838Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719625255Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719692225Z" level=info msg="TearDown network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719706453Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" returns successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719716822Z" level=info msg="TearDown network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" successfully"
Feb 13 15:52:17.719881 containerd[1886]: time="2025-02-13T15:52:17.719749661Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" returns successfully"
Feb 13 15:52:17.720261 containerd[1886]: time="2025-02-13T15:52:17.720108132Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:17.720316 containerd[1886]: time="2025-02-13T15:52:17.720260214Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:17.720316 containerd[1886]: time="2025-02-13T15:52:17.720275295Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.721619876Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.721619927Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.721943962Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.721962454Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.721997290Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:17.722030 containerd[1886]: time="2025-02-13T15:52:17.722011699Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:17.722482 containerd[1886]: time="2025-02-13T15:52:17.722354706Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:17.722482 containerd[1886]: time="2025-02-13T15:52:17.722444557Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:17.722482 containerd[1886]: time="2025-02-13T15:52:17.722459628Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:17.722608 containerd[1886]: time="2025-02-13T15:52:17.722363741Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:17.722608 containerd[1886]: time="2025-02-13T15:52:17.722593894Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:17.722713 containerd[1886]: time="2025-02-13T15:52:17.722606013Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:17.723073 containerd[1886]: time="2025-02-13T15:52:17.722968446Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:17.723256 containerd[1886]: time="2025-02-13T15:52:17.723056080Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:17.723256 containerd[1886]: time="2025-02-13T15:52:17.723248138Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:17.723351 containerd[1886]: time="2025-02-13T15:52:17.723264648Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:17.723351 containerd[1886]: time="2025-02-13T15:52:17.723061067Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:17.723351 containerd[1886]: time="2025-02-13T15:52:17.723324350Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:17.723941 containerd[1886]: time="2025-02-13T15:52:17.723873114Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:17.724156 containerd[1886]: time="2025-02-13T15:52:17.724107187Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:17.724214 containerd[1886]: time="2025-02-13T15:52:17.724166572Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:17.724263 containerd[1886]: time="2025-02-13T15:52:17.723873552Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:17.724305 containerd[1886]: time="2025-02-13T15:52:17.724291834Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:17.724353 containerd[1886]: time="2025-02-13T15:52:17.724306502Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:17.727669 containerd[1886]: time="2025-02-13T15:52:17.727617144Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:17.727786 containerd[1886]: time="2025-02-13T15:52:17.727617266Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:17.727962 containerd[1886]: time="2025-02-13T15:52:17.727896073Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:17.727962 containerd[1886]: time="2025-02-13T15:52:17.727946752Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:17.728088 containerd[1886]: time="2025-02-13T15:52:17.727896153Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:17.728088 containerd[1886]: time="2025-02-13T15:52:17.728038604Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:17.729228 containerd[1886]: time="2025-02-13T15:52:17.729192568Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:17.729311 containerd[1886]: time="2025-02-13T15:52:17.729298483Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:17.729355 containerd[1886]: time="2025-02-13T15:52:17.729315219Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:17.729425 containerd[1886]: time="2025-02-13T15:52:17.729396079Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:17.729506 containerd[1886]: time="2025-02-13T15:52:17.729482671Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:17.729565 containerd[1886]: time="2025-02-13T15:52:17.729502274Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:17.730591 containerd[1886]: time="2025-02-13T15:52:17.730551917Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:17.730673 containerd[1886]: time="2025-02-13T15:52:17.730649721Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:17.730673 containerd[1886]: time="2025-02-13T15:52:17.730664719Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:17.730969 containerd[1886]: time="2025-02-13T15:52:17.730800103Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:17.730969 containerd[1886]: time="2025-02-13T15:52:17.730892393Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:17.730969 containerd[1886]: time="2025-02-13T15:52:17.730907259Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:17.731159 containerd[1886]: time="2025-02-13T15:52:17.731023424Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:17.731159 containerd[1886]: time="2025-02-13T15:52:17.731102673Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:17.731159 containerd[1886]: time="2025-02-13T15:52:17.731116429Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:17.732745 containerd[1886]: time="2025-02-13T15:52:17.732670916Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:17.732895 containerd[1886]: time="2025-02-13T15:52:17.732778386Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:17.732895 containerd[1886]: time="2025-02-13T15:52:17.732795623Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:17.733972 containerd[1886]: time="2025-02-13T15:52:17.733942713Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:17.734293 containerd[1886]: time="2025-02-13T15:52:17.734048606Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:17.734293 containerd[1886]: time="2025-02-13T15:52:17.734069030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:11,}"
Feb 13 15:52:17.735051 containerd[1886]: time="2025-02-13T15:52:17.734774668Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:17.737741 containerd[1886]: time="2025-02-13T15:52:17.737381789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:11,}"
Feb 13 15:52:18.062999 kubelet[2377]: E0213 15:52:18.062942    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:18.087871 (udev-worker)[3468]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:52:18.088894 systemd-networkd[1728]: calid3be34dfc94: Link UP
Feb 13 15:52:18.089625 systemd-networkd[1728]: calid3be34dfc94: Gained carrier
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.836 [INFO][3830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.866 [INFO][3830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0 nginx-deployment-6d5f899847- default  d5b6846b-905c-4b4d-ab65-740bbfe8f5eb 1142 0 2025-02-13 15:52:05 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.28.66  nginx-deployment-6d5f899847-2dzlr eth0 default [] []   [kns.default ksa.default.default] calid3be34dfc94  [] []}} ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.866 [INFO][3830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.922 [INFO][3857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" HandleID="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.952 [INFO][3857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" HandleID="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319420), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.66", "pod":"nginx-deployment-6d5f899847-2dzlr", "timestamp":"2025-02-13 15:52:17.922106408 +0000 UTC"}, Hostname:"172.31.28.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.952 [INFO][3857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.952 [INFO][3857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.952 [INFO][3857] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.66'
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.960 [INFO][3857] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:17.978 [INFO][3857] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.003 [INFO][3857] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.008 [INFO][3857] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.016 [INFO][3857] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.016 [INFO][3857] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.028 [INFO][3857] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.042 [INFO][3857] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.074 [INFO][3857] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.65/26] block=192.168.66.64/26 handle="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.074 [INFO][3857] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.65/26] handle="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" host="172.31.28.66"
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.074 [INFO][3857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:52:18.131775 containerd[1886]: 2025-02-13 15:52:18.074 [INFO][3857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.65/26] IPv6=[] ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" HandleID="k8s-pod-network.58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Workload="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.076 [INFO][3830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d5b6846b-905c-4b4d-ab65-740bbfe8f5eb", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"", Pod:"nginx-deployment-6d5f899847-2dzlr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3be34dfc94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.076 [INFO][3830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.65/32] ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.076 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3be34dfc94 ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.090 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.090 [INFO][3830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d5b6846b-905c-4b4d-ab65-740bbfe8f5eb", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b", Pod:"nginx-deployment-6d5f899847-2dzlr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid3be34dfc94", MAC:"82:52:43:90:59:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:18.135676 containerd[1886]: 2025-02-13 15:52:18.117 [INFO][3830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b" Namespace="default" Pod="nginx-deployment-6d5f899847-2dzlr" WorkloadEndpoint="172.31.28.66-k8s-nginx--deployment--6d5f899847--2dzlr-eth0"
Feb 13 15:52:18.215421 containerd[1886]: time="2025-02-13T15:52:18.204816806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:18.221470 systemd-networkd[1728]: cali28d44f7357c: Link UP
Feb 13 15:52:18.223545 systemd-networkd[1728]: cali28d44f7357c: Gained carrier
Feb 13 15:52:18.225463 containerd[1886]: time="2025-02-13T15:52:18.218766528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:18.225463 containerd[1886]: time="2025-02-13T15:52:18.218817203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:18.225463 containerd[1886]: time="2025-02-13T15:52:18.223874554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.840 [INFO][3831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.866 [INFO][3831] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.66-k8s-csi--node--driver--hswh5-eth0 csi-node-driver- calico-system  c021f669-a6e0-4344-be54-42ff0a3b9776 1143 0 2025-02-13 15:51:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  172.31.28.66  csi-node-driver-hswh5 eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali28d44f7357c  [] []}} ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.866 [INFO][3831] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.924 [INFO][3856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" HandleID="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.960 [INFO][3856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" HandleID="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.28.66", "pod":"csi-node-driver-hswh5", "timestamp":"2025-02-13 15:52:17.924864225 +0000 UTC"}, Hostname:"172.31.28.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:17.960 [INFO][3856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.074 [INFO][3856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.075 [INFO][3856] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.66'
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.085 [INFO][3856] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.106 [INFO][3856] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.129 [INFO][3856] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.156 [INFO][3856] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.164 [INFO][3856] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.164 [INFO][3856] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.174 [INFO][3856] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.192 [INFO][3856] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.207 [INFO][3856] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.66/26] block=192.168.66.64/26 handle="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.208 [INFO][3856] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.66/26] handle="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" host="172.31.28.66"
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.208 [INFO][3856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:52:18.258184 containerd[1886]: 2025-02-13 15:52:18.208 [INFO][3856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.66/26] IPv6=[] ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" HandleID="k8s-pod-network.2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Workload="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.210 [INFO][3831] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-csi--node--driver--hswh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c021f669-a6e0-4344-be54-42ff0a3b9776", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"", Pod:"csi-node-driver-hswh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28d44f7357c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.211 [INFO][3831] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.66/32] ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.211 [INFO][3831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28d44f7357c ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.222 [INFO][3831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.222 [INFO][3831] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-csi--node--driver--hswh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c021f669-a6e0-4344-be54-42ff0a3b9776", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb", Pod:"csi-node-driver-hswh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28d44f7357c", MAC:"e6:6f:16:ed:d7:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:18.259981 containerd[1886]: 2025-02-13 15:52:18.247 [INFO][3831] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb" Namespace="calico-system" Pod="csi-node-driver-hswh5" WorkloadEndpoint="172.31.28.66-k8s-csi--node--driver--hswh5-eth0"
Feb 13 15:52:18.299868 systemd[1]: Started cri-containerd-58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b.scope - libcontainer container 58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b.
Feb 13 15:52:18.344580 containerd[1886]: time="2025-02-13T15:52:18.344227194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:18.344580 containerd[1886]: time="2025-02-13T15:52:18.344300141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:18.346190 containerd[1886]: time="2025-02-13T15:52:18.344320990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:18.352843 containerd[1886]: time="2025-02-13T15:52:18.352693370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:18.412005 systemd[1]: Started cri-containerd-2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb.scope - libcontainer container 2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb.
Feb 13 15:52:18.514124 containerd[1886]: time="2025-02-13T15:52:18.514073284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2dzlr,Uid:d5b6846b-905c-4b4d-ab65-740bbfe8f5eb,Namespace:default,Attempt:11,} returns sandbox id \"58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b\""
Feb 13 15:52:18.518240 containerd[1886]: time="2025-02-13T15:52:18.518105580Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:52:18.524763 containerd[1886]: time="2025-02-13T15:52:18.524618551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hswh5,Uid:c021f669-a6e0-4344-be54-42ff0a3b9776,Namespace:calico-system,Attempt:11,} returns sandbox id \"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb\""
Feb 13 15:52:18.879763 kernel: bpftool[4074]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 15:52:19.063591 kubelet[2377]: E0213 15:52:19.063539    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:19.261948 systemd-networkd[1728]: cali28d44f7357c: Gained IPv6LL
Feb 13 15:52:19.376961 systemd-networkd[1728]: vxlan.calico: Link UP
Feb 13 15:52:19.377370 systemd-networkd[1728]: vxlan.calico: Gained carrier
Feb 13 15:52:19.379708 (udev-worker)[3454]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:52:19.676226 kubelet[2377]: I0213 15:52:19.676173    2377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:52:20.036197 systemd-networkd[1728]: calid3be34dfc94: Gained IPv6LL
Feb 13 15:52:20.073630 kubelet[2377]: E0213 15:52:20.064375    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:20.677468 systemd-networkd[1728]: vxlan.calico: Gained IPv6LL
Feb 13 15:52:21.065133 kubelet[2377]: E0213 15:52:21.065090    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:22.066834 kubelet[2377]: E0213 15:52:22.065888    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:22.715149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584834042.mount: Deactivated successfully.
Feb 13 15:52:23.068226 kubelet[2377]: E0213 15:52:23.068183    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:23.656271 ntpd[1865]: Listen normally on 7 vxlan.calico 192.168.66.64:123
Feb 13 15:52:23.657017 ntpd[1865]: Listen normally on 8 calid3be34dfc94 [fe80::ecee:eeff:feee:eeee%3]:123
Feb 13 15:52:23.657078 ntpd[1865]: Listen normally on 9 cali28d44f7357c [fe80::ecee:eeff:feee:eeee%4]:123
Feb 13 15:52:23.657117 ntpd[1865]: Listen normally on 10 vxlan.calico [fe80::6452:c6ff:fe12:c82%5]:123
Feb 13 15:52:24.069872 kubelet[2377]: E0213 15:52:24.068882    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:24.772081 containerd[1886]: time="2025-02-13T15:52:24.771689006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:24.774585 containerd[1886]: time="2025-02-13T15:52:24.774536128Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 15:52:24.791417 containerd[1886]: time="2025-02-13T15:52:24.790645335Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:24.795842 containerd[1886]: time="2025-02-13T15:52:24.795181023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:24.798108 containerd[1886]: time="2025-02-13T15:52:24.796998665Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.278842537s"
Feb 13 15:52:24.798108 containerd[1886]: time="2025-02-13T15:52:24.797045184Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:52:24.798776 containerd[1886]: time="2025-02-13T15:52:24.798457662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 15:52:24.920800 containerd[1886]: time="2025-02-13T15:52:24.920667242Z" level=info msg="CreateContainer within sandbox \"58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:52:24.974292 containerd[1886]: time="2025-02-13T15:52:24.974191249Z" level=info msg="CreateContainer within sandbox \"58c36c96b75b6222cb9544a1216caba12aea860aaae4045b122876ae9801ae0b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753\""
Feb 13 15:52:24.975811 containerd[1886]: time="2025-02-13T15:52:24.975766256Z" level=info msg="StartContainer for \"973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753\""
Feb 13 15:52:25.069875 kubelet[2377]: E0213 15:52:25.069717    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:25.077027 systemd[1]: run-containerd-runc-k8s.io-973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753-runc.JZvXXX.mount: Deactivated successfully.
Feb 13 15:52:25.095416 systemd[1]: Started cri-containerd-973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753.scope - libcontainer container 973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753.
Feb 13 15:52:25.152330 containerd[1886]: time="2025-02-13T15:52:25.152281848Z" level=info msg="StartContainer for \"973a002ba3660c53dc8d0466b5554d558e8aecf07011edff4441b8d45ea52753\" returns successfully"
Feb 13 15:52:26.070641 kubelet[2377]: E0213 15:52:26.070592    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:26.634753 containerd[1886]: time="2025-02-13T15:52:26.634372864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:26.639216 containerd[1886]: time="2025-02-13T15:52:26.639131371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 15:52:26.640999 containerd[1886]: time="2025-02-13T15:52:26.640929824Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:26.647331 containerd[1886]: time="2025-02-13T15:52:26.646499117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:26.647331 containerd[1886]: time="2025-02-13T15:52:26.647190425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.848617672s"
Feb 13 15:52:26.647331 containerd[1886]: time="2025-02-13T15:52:26.647228930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 15:52:26.649344 containerd[1886]: time="2025-02-13T15:52:26.649306218Z" level=info msg="CreateContainer within sandbox \"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 15:52:26.742758 containerd[1886]: time="2025-02-13T15:52:26.742688225Z" level=info msg="CreateContainer within sandbox \"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d\""
Feb 13 15:52:26.743281 containerd[1886]: time="2025-02-13T15:52:26.743247025Z" level=info msg="StartContainer for \"9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d\""
Feb 13 15:52:26.790122 systemd[1]: run-containerd-runc-k8s.io-9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d-runc.UNcmlE.mount: Deactivated successfully.
Feb 13 15:52:26.800967 systemd[1]: Started cri-containerd-9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d.scope - libcontainer container 9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d.
Feb 13 15:52:26.850182 containerd[1886]: time="2025-02-13T15:52:26.850027659Z" level=info msg="StartContainer for \"9976af5ccb4f9aab0b05767f0ad026de80524db2fecba1828acb2ccc5c6aac1d\" returns successfully"
Feb 13 15:52:26.851554 containerd[1886]: time="2025-02-13T15:52:26.851514729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:52:27.070993 kubelet[2377]: E0213 15:52:27.070929    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:28.071139 kubelet[2377]: E0213 15:52:28.071078    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:28.627861 containerd[1886]: time="2025-02-13T15:52:28.627807805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:28.634175 containerd[1886]: time="2025-02-13T15:52:28.634098907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 15:52:28.636163 containerd[1886]: time="2025-02-13T15:52:28.636096329Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:28.640739 containerd[1886]: time="2025-02-13T15:52:28.640657752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:28.641597 containerd[1886]: time="2025-02-13T15:52:28.641553182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.789994917s"
Feb 13 15:52:28.641694 containerd[1886]: time="2025-02-13T15:52:28.641602935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 15:52:28.654217 containerd[1886]: time="2025-02-13T15:52:28.654163265Z" level=info msg="CreateContainer within sandbox \"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:52:28.686578 containerd[1886]: time="2025-02-13T15:52:28.686436283Z" level=info msg="CreateContainer within sandbox \"2955f312296ad10e5472e3347be7d09c85483d9f104ac7fcdf8a1b48378597bb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911\""
Feb 13 15:52:28.688869 containerd[1886]: time="2025-02-13T15:52:28.688745173Z" level=info msg="StartContainer for \"2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911\""
Feb 13 15:52:28.742532 systemd[1]: run-containerd-runc-k8s.io-2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911-runc.czzQGw.mount: Deactivated successfully.
Feb 13 15:52:28.750967 systemd[1]: Started cri-containerd-2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911.scope - libcontainer container 2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911.
Feb 13 15:52:28.795534 containerd[1886]: time="2025-02-13T15:52:28.795218448Z" level=info msg="StartContainer for \"2b2c5bdedb0ee24ff5b570fa8480f85edb80eade8a806f2f0790ca690fb27911\" returns successfully"
Feb 13 15:52:28.837322 kubelet[2377]: I0213 15:52:28.837289    2377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2dzlr" podStartSLOduration=17.555165279 podStartE2EDuration="23.837242758s" podCreationTimestamp="2025-02-13 15:52:05 +0000 UTC" firstStartedPulling="2025-02-13 15:52:18.515514667 +0000 UTC m=+30.476500638" lastFinishedPulling="2025-02-13 15:52:24.797592144 +0000 UTC m=+36.758578117" observedRunningTime="2025-02-13 15:52:25.795887901 +0000 UTC m=+37.756873883" watchObservedRunningTime="2025-02-13 15:52:28.837242758 +0000 UTC m=+40.798228795"
Feb 13 15:52:29.024496 kubelet[2377]: E0213 15:52:29.024416    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:29.071785 kubelet[2377]: E0213 15:52:29.071720    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:29.178113 kubelet[2377]: I0213 15:52:29.178078    2377 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:52:29.179969 kubelet[2377]: I0213 15:52:29.179939    2377 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:52:30.072345 kubelet[2377]: E0213 15:52:30.072244    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:31.073489 kubelet[2377]: E0213 15:52:31.073427    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:32.074687 kubelet[2377]: E0213 15:52:32.074626    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:33.074894 kubelet[2377]: E0213 15:52:33.074834    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:34.075811 kubelet[2377]: E0213 15:52:34.075763    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:35.056480 kubelet[2377]: I0213 15:52:35.056433    2377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hswh5" podStartSLOduration=35.935552041 podStartE2EDuration="46.056339105s" podCreationTimestamp="2025-02-13 15:51:49 +0000 UTC" firstStartedPulling="2025-02-13 15:52:18.53038556 +0000 UTC m=+30.491371525" lastFinishedPulling="2025-02-13 15:52:28.651172618 +0000 UTC m=+40.612158589" observedRunningTime="2025-02-13 15:52:28.838021944 +0000 UTC m=+40.799007920" watchObservedRunningTime="2025-02-13 15:52:35.056339105 +0000 UTC m=+47.017325074"
Feb 13 15:52:35.056777 kubelet[2377]: I0213 15:52:35.056705    2377 topology_manager.go:215] "Topology Admit Handler" podUID="c0b64072-2028-4ccf-bf9b-ebe4e6021476" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 15:52:35.069137 systemd[1]: Created slice kubepods-besteffort-podc0b64072_2028_4ccf_bf9b_ebe4e6021476.slice - libcontainer container kubepods-besteffort-podc0b64072_2028_4ccf_bf9b_ebe4e6021476.slice.
Feb 13 15:52:35.076995 kubelet[2377]: E0213 15:52:35.076482    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:35.211265 kubelet[2377]: I0213 15:52:35.211227    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c0b64072-2028-4ccf-bf9b-ebe4e6021476-data\") pod \"nfs-server-provisioner-0\" (UID: \"c0b64072-2028-4ccf-bf9b-ebe4e6021476\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:52:35.211433 kubelet[2377]: I0213 15:52:35.211290    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbwb\" (UniqueName: \"kubernetes.io/projected/c0b64072-2028-4ccf-bf9b-ebe4e6021476-kube-api-access-cqbwb\") pod \"nfs-server-provisioner-0\" (UID: \"c0b64072-2028-4ccf-bf9b-ebe4e6021476\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:52:35.373924 containerd[1886]: time="2025-02-13T15:52:35.373788089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0b64072-2028-4ccf-bf9b-ebe4e6021476,Namespace:default,Attempt:0,}"
Feb 13 15:52:35.688606 systemd-networkd[1728]: cali60e51b789ff: Link UP
Feb 13 15:52:35.689927 (udev-worker)[4394]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:52:35.691444 systemd-networkd[1728]: cali60e51b789ff: Gained carrier
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.504 [INFO][4399] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.66-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  c0b64072-2028-4ccf-bf9b-ebe4e6021476 1237 0 2025-02-13 15:52:35 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  172.31.28.66  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.506 [INFO][4399] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.585 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" HandleID="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Workload="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.616 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" HandleID="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Workload="172.31.28.66-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.66", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:52:35.585510014 +0000 UTC"}, Hostname:"172.31.28.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.616 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.616 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.616 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.66'
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.622 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.635 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.650 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.653 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.658 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.658 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.660 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.672 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.681 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.67/26] block=192.168.66.64/26 handle="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.682 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.67/26] handle="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" host="172.31.28.66"
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.682 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:52:35.709906 containerd[1886]: 2025-02-13 15:52:35.682 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.67/26] IPv6=[] ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" HandleID="k8s-pod-network.a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Workload="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.711087 containerd[1886]: 2025-02-13 15:52:35.684 [INFO][4399] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c0b64072-2028-4ccf-bf9b-ebe4e6021476", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:35.711087 containerd[1886]: 2025-02-13 15:52:35.684 [INFO][4399] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.67/32] ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.711087 containerd[1886]: 2025-02-13 15:52:35.684 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.711087 containerd[1886]: 2025-02-13 15:52:35.687 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.711390 containerd[1886]: 2025-02-13 15:52:35.690 [INFO][4399] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c0b64072-2028-4ccf-bf9b-ebe4e6021476", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6a:f2:f5:c7:33:61", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:52:35.711390 containerd[1886]: 2025-02-13 15:52:35.705 [INFO][4399] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.66-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:52:35.855656 containerd[1886]: time="2025-02-13T15:52:35.855518475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:35.855928 containerd[1886]: time="2025-02-13T15:52:35.855615253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:35.855928 containerd[1886]: time="2025-02-13T15:52:35.855637978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:35.856109 containerd[1886]: time="2025-02-13T15:52:35.855943647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:35.893962 systemd[1]: Started cri-containerd-a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081.scope - libcontainer container a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081.
Feb 13 15:52:35.955290 containerd[1886]: time="2025-02-13T15:52:35.955100315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0b64072-2028-4ccf-bf9b-ebe4e6021476,Namespace:default,Attempt:0,} returns sandbox id \"a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081\""
Feb 13 15:52:35.959152 containerd[1886]: time="2025-02-13T15:52:35.958772095Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:52:36.078845 kubelet[2377]: E0213 15:52:36.078798    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:36.995828 systemd-networkd[1728]: cali60e51b789ff: Gained IPv6LL
Feb 13 15:52:37.083540 kubelet[2377]: E0213 15:52:37.082402    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:38.085896 kubelet[2377]: E0213 15:52:38.085856    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:39.087860 kubelet[2377]: E0213 15:52:39.087696    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:39.656265 ntpd[1865]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 15:52:39.657821 ntpd[1865]: 13 Feb 15:52:39 ntpd[1865]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 15:52:40.089506 kubelet[2377]: E0213 15:52:40.089463    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:40.248897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102663358.mount: Deactivated successfully.
Feb 13 15:52:41.090629 kubelet[2377]: E0213 15:52:41.090196    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:42.091282 kubelet[2377]: E0213 15:52:42.091238    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:43.093208 kubelet[2377]: E0213 15:52:43.092979    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:43.644181 containerd[1886]: time="2025-02-13T15:52:43.644123198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:43.646074 containerd[1886]: time="2025-02-13T15:52:43.645744963Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:52:43.647811 containerd[1886]: time="2025-02-13T15:52:43.647764250Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:43.652862 containerd[1886]: time="2025-02-13T15:52:43.651490512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:52:43.652862 containerd[1886]: time="2025-02-13T15:52:43.652680101Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.693862861s"
Feb 13 15:52:43.652862 containerd[1886]: time="2025-02-13T15:52:43.652740747Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:52:43.656314 containerd[1886]: time="2025-02-13T15:52:43.656274426Z" level=info msg="CreateContainer within sandbox \"a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:52:43.678866 containerd[1886]: time="2025-02-13T15:52:43.678761661Z" level=info msg="CreateContainer within sandbox \"a778885579ee6a93f979ef01e49ee473006c59f4b552041df364ed42414de081\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"87849bed20f1b6bb3d7e9974871ef4a57088d5f3c36870ceef66cd08bcf41031\""
Feb 13 15:52:43.681697 containerd[1886]: time="2025-02-13T15:52:43.680459801Z" level=info msg="StartContainer for \"87849bed20f1b6bb3d7e9974871ef4a57088d5f3c36870ceef66cd08bcf41031\""
Feb 13 15:52:43.725004 systemd[1]: Started cri-containerd-87849bed20f1b6bb3d7e9974871ef4a57088d5f3c36870ceef66cd08bcf41031.scope - libcontainer container 87849bed20f1b6bb3d7e9974871ef4a57088d5f3c36870ceef66cd08bcf41031.
Feb 13 15:52:43.762989 containerd[1886]: time="2025-02-13T15:52:43.762932324Z" level=info msg="StartContainer for \"87849bed20f1b6bb3d7e9974871ef4a57088d5f3c36870ceef66cd08bcf41031\" returns successfully"
Feb 13 15:52:44.094408 kubelet[2377]: E0213 15:52:44.094048    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:45.095066 kubelet[2377]: E0213 15:52:45.094945    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:46.095750 kubelet[2377]: E0213 15:52:46.095625    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:47.095943 kubelet[2377]: E0213 15:52:47.095887    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:48.096487 kubelet[2377]: E0213 15:52:48.096427    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:49.025614 kubelet[2377]: E0213 15:52:49.025561    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:49.063839 containerd[1886]: time="2025-02-13T15:52:49.063788488Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:49.064742 containerd[1886]: time="2025-02-13T15:52:49.064203064Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:49.064742 containerd[1886]: time="2025-02-13T15:52:49.064277925Z" level=info msg="StopPodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:49.085974 containerd[1886]: time="2025-02-13T15:52:49.085757616Z" level=info msg="RemovePodSandbox for \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:49.103619 kubelet[2377]: E0213 15:52:49.102277    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:49.108202 containerd[1886]: time="2025-02-13T15:52:49.108080136Z" level=info msg="Forcibly stopping sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\""
Feb 13 15:52:49.108475 containerd[1886]: time="2025-02-13T15:52:49.108290800Z" level=info msg="TearDown network for sandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" successfully"
Feb 13 15:52:49.138151 containerd[1886]: time="2025-02-13T15:52:49.138098377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.138304 containerd[1886]: time="2025-02-13T15:52:49.138188292Z" level=info msg="RemovePodSandbox \"a70083bc0894d7f51bae91184c692b4738fe4af190ab3bc169493f403f08f1ad\" returns successfully"
Feb 13 15:52:49.139350 containerd[1886]: time="2025-02-13T15:52:49.139319752Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:49.139617 containerd[1886]: time="2025-02-13T15:52:49.139588890Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:49.139617 containerd[1886]: time="2025-02-13T15:52:49.139613858Z" level=info msg="StopPodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:49.140243 containerd[1886]: time="2025-02-13T15:52:49.140212625Z" level=info msg="RemovePodSandbox for \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:49.140243 containerd[1886]: time="2025-02-13T15:52:49.140246054Z" level=info msg="Forcibly stopping sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\""
Feb 13 15:52:49.140401 containerd[1886]: time="2025-02-13T15:52:49.140341521Z" level=info msg="TearDown network for sandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" successfully"
Feb 13 15:52:49.172880 containerd[1886]: time="2025-02-13T15:52:49.172353596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.173038 containerd[1886]: time="2025-02-13T15:52:49.172911191Z" level=info msg="RemovePodSandbox \"0e77898dd262e883c8d36e6c0124702883f64eb56b0b0d78db88036d05b70ece\" returns successfully"
Feb 13 15:52:49.175176 containerd[1886]: time="2025-02-13T15:52:49.174843810Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:49.175897 containerd[1886]: time="2025-02-13T15:52:49.175308591Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:49.175897 containerd[1886]: time="2025-02-13T15:52:49.175621758Z" level=info msg="StopPodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:49.177184 containerd[1886]: time="2025-02-13T15:52:49.177158512Z" level=info msg="RemovePodSandbox for \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:49.177941 containerd[1886]: time="2025-02-13T15:52:49.177195030Z" level=info msg="Forcibly stopping sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\""
Feb 13 15:52:49.178033 containerd[1886]: time="2025-02-13T15:52:49.177708752Z" level=info msg="TearDown network for sandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" successfully"
Feb 13 15:52:49.193567 containerd[1886]: time="2025-02-13T15:52:49.193348934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.193567 containerd[1886]: time="2025-02-13T15:52:49.193421327Z" level=info msg="RemovePodSandbox \"507cd8da34a43acd17fb45ec54d0bcd3e1f1c48387b4e32622fe286ff86d6273\" returns successfully"
Feb 13 15:52:49.197817 containerd[1886]: time="2025-02-13T15:52:49.197740039Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:49.197976 containerd[1886]: time="2025-02-13T15:52:49.197913549Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:49.197976 containerd[1886]: time="2025-02-13T15:52:49.197932338Z" level=info msg="StopPodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:49.205173 containerd[1886]: time="2025-02-13T15:52:49.201399336Z" level=info msg="RemovePodSandbox for \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:49.205173 containerd[1886]: time="2025-02-13T15:52:49.201591206Z" level=info msg="Forcibly stopping sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\""
Feb 13 15:52:49.205173 containerd[1886]: time="2025-02-13T15:52:49.202048582Z" level=info msg="TearDown network for sandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" successfully"
Feb 13 15:52:49.212275 containerd[1886]: time="2025-02-13T15:52:49.212101508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.212429 containerd[1886]: time="2025-02-13T15:52:49.212287502Z" level=info msg="RemovePodSandbox \"13c7da6c505525ec4664bac24749d880e574da487ae161cd1d2d0dbb3ca5fcf2\" returns successfully"
Feb 13 15:52:49.214484 containerd[1886]: time="2025-02-13T15:52:49.213396316Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:49.214484 containerd[1886]: time="2025-02-13T15:52:49.214353239Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:49.214484 containerd[1886]: time="2025-02-13T15:52:49.214376675Z" level=info msg="StopPodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:49.225223 containerd[1886]: time="2025-02-13T15:52:49.225169142Z" level=info msg="RemovePodSandbox for \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:49.228258 containerd[1886]: time="2025-02-13T15:52:49.228218874Z" level=info msg="Forcibly stopping sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\""
Feb 13 15:52:49.234190 containerd[1886]: time="2025-02-13T15:52:49.228378708Z" level=info msg="TearDown network for sandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" successfully"
Feb 13 15:52:49.250657 containerd[1886]: time="2025-02-13T15:52:49.250554900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.250657 containerd[1886]: time="2025-02-13T15:52:49.250637015Z" level=info msg="RemovePodSandbox \"5031dda4cd07408b77a058132650b88ae38f2d02f893f21bfaff43ef25186239\" returns successfully"
Feb 13 15:52:49.251216 containerd[1886]: time="2025-02-13T15:52:49.251185325Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:49.251548 containerd[1886]: time="2025-02-13T15:52:49.251310115Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:49.251548 containerd[1886]: time="2025-02-13T15:52:49.251328426Z" level=info msg="StopPodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:49.252044 containerd[1886]: time="2025-02-13T15:52:49.252005612Z" level=info msg="RemovePodSandbox for \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:49.252044 containerd[1886]: time="2025-02-13T15:52:49.252039053Z" level=info msg="Forcibly stopping sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\""
Feb 13 15:52:49.252281 containerd[1886]: time="2025-02-13T15:52:49.252193816Z" level=info msg="TearDown network for sandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" successfully"
Feb 13 15:52:49.282204 containerd[1886]: time="2025-02-13T15:52:49.281915921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.282204 containerd[1886]: time="2025-02-13T15:52:49.281983678Z" level=info msg="RemovePodSandbox \"b54985976f93ec98acb1f6c64b2a4f81bf7b4217fb38f358d9cf04c7b107bfed\" returns successfully"
Feb 13 15:52:49.298401 containerd[1886]: time="2025-02-13T15:52:49.292024414Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:49.298401 containerd[1886]: time="2025-02-13T15:52:49.297471011Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:49.298401 containerd[1886]: time="2025-02-13T15:52:49.297512026Z" level=info msg="StopPodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:49.311207 containerd[1886]: time="2025-02-13T15:52:49.311166546Z" level=info msg="RemovePodSandbox for \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:49.311536 containerd[1886]: time="2025-02-13T15:52:49.311507859Z" level=info msg="Forcibly stopping sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\""
Feb 13 15:52:49.314393 containerd[1886]: time="2025-02-13T15:52:49.312872112Z" level=info msg="TearDown network for sandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" successfully"
Feb 13 15:52:49.322238 containerd[1886]: time="2025-02-13T15:52:49.322128189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.322796 containerd[1886]: time="2025-02-13T15:52:49.322705283Z" level=info msg="RemovePodSandbox \"28b007eae3918273378c8d3afb3489ced802c1dbe7f4037011ad02bd1b521d04\" returns successfully"
Feb 13 15:52:49.326153 containerd[1886]: time="2025-02-13T15:52:49.325682861Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:49.326153 containerd[1886]: time="2025-02-13T15:52:49.326026440Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:49.326153 containerd[1886]: time="2025-02-13T15:52:49.326047842Z" level=info msg="StopPodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:49.327069 containerd[1886]: time="2025-02-13T15:52:49.327045810Z" level=info msg="RemovePodSandbox for \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:49.328497 containerd[1886]: time="2025-02-13T15:52:49.327801468Z" level=info msg="Forcibly stopping sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\""
Feb 13 15:52:49.328497 containerd[1886]: time="2025-02-13T15:52:49.327905676Z" level=info msg="TearDown network for sandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" successfully"
Feb 13 15:52:49.347798 containerd[1886]: time="2025-02-13T15:52:49.347714847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.347963 containerd[1886]: time="2025-02-13T15:52:49.347822283Z" level=info msg="RemovePodSandbox \"0392f8e3b3d807076d6e07a73366be64cb6b28f4987cad1b43f03150e9d66e92\" returns successfully"
Feb 13 15:52:49.349414 containerd[1886]: time="2025-02-13T15:52:49.349185276Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:49.349414 containerd[1886]: time="2025-02-13T15:52:49.349354282Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:49.349414 containerd[1886]: time="2025-02-13T15:52:49.349366993Z" level=info msg="StopPodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:49.350111 containerd[1886]: time="2025-02-13T15:52:49.350083140Z" level=info msg="RemovePodSandbox for \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:49.350221 containerd[1886]: time="2025-02-13T15:52:49.350118571Z" level=info msg="Forcibly stopping sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\""
Feb 13 15:52:49.350269 containerd[1886]: time="2025-02-13T15:52:49.350206651Z" level=info msg="TearDown network for sandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" successfully"
Feb 13 15:52:49.373970 containerd[1886]: time="2025-02-13T15:52:49.372865327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.373970 containerd[1886]: time="2025-02-13T15:52:49.373893265Z" level=info msg="RemovePodSandbox \"ea2fefc700c82378b3d0f7b759d6ca67ce7def623473554ac186adcd6c741ea1\" returns successfully"
Feb 13 15:52:49.376039 containerd[1886]: time="2025-02-13T15:52:49.375988713Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:49.376275 containerd[1886]: time="2025-02-13T15:52:49.376220681Z" level=info msg="TearDown network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" successfully"
Feb 13 15:52:49.376275 containerd[1886]: time="2025-02-13T15:52:49.376238859Z" level=info msg="StopPodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" returns successfully"
Feb 13 15:52:49.379055 containerd[1886]: time="2025-02-13T15:52:49.379014822Z" level=info msg="RemovePodSandbox for \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:49.379187 containerd[1886]: time="2025-02-13T15:52:49.379065234Z" level=info msg="Forcibly stopping sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\""
Feb 13 15:52:49.379234 containerd[1886]: time="2025-02-13T15:52:49.379161570Z" level=info msg="TearDown network for sandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" successfully"
Feb 13 15:52:49.419277 containerd[1886]: time="2025-02-13T15:52:49.419220823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.419458 containerd[1886]: time="2025-02-13T15:52:49.419310937Z" level=info msg="RemovePodSandbox \"bf321585bac2f321fb50b95cc7baf035ee46dfeb39b276dc29dc96f85526e50b\" returns successfully"
Feb 13 15:52:49.420270 containerd[1886]: time="2025-02-13T15:52:49.420228773Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\""
Feb 13 15:52:49.420511 containerd[1886]: time="2025-02-13T15:52:49.420347388Z" level=info msg="TearDown network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" successfully"
Feb 13 15:52:49.420511 containerd[1886]: time="2025-02-13T15:52:49.420501572Z" level=info msg="StopPodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" returns successfully"
Feb 13 15:52:49.421200 containerd[1886]: time="2025-02-13T15:52:49.421173141Z" level=info msg="RemovePodSandbox for \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\""
Feb 13 15:52:49.421315 containerd[1886]: time="2025-02-13T15:52:49.421203130Z" level=info msg="Forcibly stopping sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\""
Feb 13 15:52:49.421397 containerd[1886]: time="2025-02-13T15:52:49.421350233Z" level=info msg="TearDown network for sandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" successfully"
Feb 13 15:52:49.432405 containerd[1886]: time="2025-02-13T15:52:49.432346850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.432564 containerd[1886]: time="2025-02-13T15:52:49.432423675Z" level=info msg="RemovePodSandbox \"1b7670411faa6cf68c804a73872fd6497467aa53750ded9a142cee71b651ba98\" returns successfully"
Feb 13 15:52:49.433121 containerd[1886]: time="2025-02-13T15:52:49.433032572Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:49.433321 containerd[1886]: time="2025-02-13T15:52:49.433248423Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:49.433321 containerd[1886]: time="2025-02-13T15:52:49.433289632Z" level=info msg="StopPodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:49.433789 containerd[1886]: time="2025-02-13T15:52:49.433710748Z" level=info msg="RemovePodSandbox for \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:49.433789 containerd[1886]: time="2025-02-13T15:52:49.433772419Z" level=info msg="Forcibly stopping sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\""
Feb 13 15:52:49.433918 containerd[1886]: time="2025-02-13T15:52:49.433863350Z" level=info msg="TearDown network for sandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" successfully"
Feb 13 15:52:49.447880 containerd[1886]: time="2025-02-13T15:52:49.447826830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.448065 containerd[1886]: time="2025-02-13T15:52:49.448030355Z" level=info msg="RemovePodSandbox \"1ae5461c4625afb3c38a6fe966fc209d106068a4c4df721be2b748a563218aab\" returns successfully"
Feb 13 15:52:49.448646 containerd[1886]: time="2025-02-13T15:52:49.448612949Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:49.448770 containerd[1886]: time="2025-02-13T15:52:49.448752945Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:49.448835 containerd[1886]: time="2025-02-13T15:52:49.448769958Z" level=info msg="StopPodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:49.449337 containerd[1886]: time="2025-02-13T15:52:49.449242743Z" level=info msg="RemovePodSandbox for \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:49.449337 containerd[1886]: time="2025-02-13T15:52:49.449296353Z" level=info msg="Forcibly stopping sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\""
Feb 13 15:52:49.449575 containerd[1886]: time="2025-02-13T15:52:49.449487036Z" level=info msg="TearDown network for sandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" successfully"
Feb 13 15:52:49.457323 containerd[1886]: time="2025-02-13T15:52:49.457237220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.457466 containerd[1886]: time="2025-02-13T15:52:49.457335001Z" level=info msg="RemovePodSandbox \"bea23d5ab552da113418346ed6c22a0e12669dc9546ea319eff38985085c6184\" returns successfully"
Feb 13 15:52:49.457908 containerd[1886]: time="2025-02-13T15:52:49.457877547Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:49.458015 containerd[1886]: time="2025-02-13T15:52:49.457990784Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:49.458077 containerd[1886]: time="2025-02-13T15:52:49.458012880Z" level=info msg="StopPodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:49.458347 containerd[1886]: time="2025-02-13T15:52:49.458317164Z" level=info msg="RemovePodSandbox for \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:49.458428 containerd[1886]: time="2025-02-13T15:52:49.458348369Z" level=info msg="Forcibly stopping sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\""
Feb 13 15:52:49.458472 containerd[1886]: time="2025-02-13T15:52:49.458427533Z" level=info msg="TearDown network for sandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" successfully"
Feb 13 15:52:49.476609 containerd[1886]: time="2025-02-13T15:52:49.476549109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.477062 containerd[1886]: time="2025-02-13T15:52:49.476627883Z" level=info msg="RemovePodSandbox \"be4aba080514f72fe1c85543e31dfaed2fa1e3dbc1753d07d0e2547f95c078e5\" returns successfully"
Feb 13 15:52:49.477744 containerd[1886]: time="2025-02-13T15:52:49.477680482Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:49.478676 containerd[1886]: time="2025-02-13T15:52:49.478643685Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:49.478676 containerd[1886]: time="2025-02-13T15:52:49.478671834Z" level=info msg="StopPodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:49.479877 containerd[1886]: time="2025-02-13T15:52:49.479823870Z" level=info msg="RemovePodSandbox for \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:49.479877 containerd[1886]: time="2025-02-13T15:52:49.479858598Z" level=info msg="Forcibly stopping sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\""
Feb 13 15:52:49.480379 containerd[1886]: time="2025-02-13T15:52:49.480242960Z" level=info msg="TearDown network for sandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" successfully"
Feb 13 15:52:49.497511 containerd[1886]: time="2025-02-13T15:52:49.497387966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.497511 containerd[1886]: time="2025-02-13T15:52:49.497510480Z" level=info msg="RemovePodSandbox \"5aa9d20bb779a7797754e197bf01e65cddf9b5d18a423f858dd9bf33813c3079\" returns successfully"
Feb 13 15:52:49.498428 containerd[1886]: time="2025-02-13T15:52:49.498377220Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:49.498612 containerd[1886]: time="2025-02-13T15:52:49.498581590Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:49.498612 containerd[1886]: time="2025-02-13T15:52:49.498601193Z" level=info msg="StopPodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:49.499640 containerd[1886]: time="2025-02-13T15:52:49.499422132Z" level=info msg="RemovePodSandbox for \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:49.499640 containerd[1886]: time="2025-02-13T15:52:49.499538298Z" level=info msg="Forcibly stopping sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\""
Feb 13 15:52:49.500291 containerd[1886]: time="2025-02-13T15:52:49.499656080Z" level=info msg="TearDown network for sandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" successfully"
Feb 13 15:52:49.514711 containerd[1886]: time="2025-02-13T15:52:49.514660774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.516028 containerd[1886]: time="2025-02-13T15:52:49.515333751Z" level=info msg="RemovePodSandbox \"6191e4c7b317aa276433edee4ef685e4d236f22124a2d95630ddecc460f6e1b2\" returns successfully"
Feb 13 15:52:49.516539 containerd[1886]: time="2025-02-13T15:52:49.516246028Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:49.516539 containerd[1886]: time="2025-02-13T15:52:49.516345077Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:49.516539 containerd[1886]: time="2025-02-13T15:52:49.516356140Z" level=info msg="StopPodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:49.518748 containerd[1886]: time="2025-02-13T15:52:49.518549846Z" level=info msg="RemovePodSandbox for \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:49.518748 containerd[1886]: time="2025-02-13T15:52:49.518594833Z" level=info msg="Forcibly stopping sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\""
Feb 13 15:52:49.519583 containerd[1886]: time="2025-02-13T15:52:49.519284710Z" level=info msg="TearDown network for sandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" successfully"
Feb 13 15:52:49.532678 containerd[1886]: time="2025-02-13T15:52:49.532055069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.532678 containerd[1886]: time="2025-02-13T15:52:49.532354379Z" level=info msg="RemovePodSandbox \"51a4e8d84a713eb055931dee9e1635ba953da5e5c5549afc0a72a21f00bd6f74\" returns successfully"
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533036097Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533163397Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533178834Z" level=info msg="StopPodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533771314Z" level=info msg="RemovePodSandbox for \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533817578Z" level=info msg="Forcibly stopping sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\""
Feb 13 15:52:49.539536 containerd[1886]: time="2025-02-13T15:52:49.533905741Z" level=info msg="TearDown network for sandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" successfully"
Feb 13 15:52:49.543578 containerd[1886]: time="2025-02-13T15:52:49.543529712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.543701 containerd[1886]: time="2025-02-13T15:52:49.543608081Z" level=info msg="RemovePodSandbox \"c5bd3bc1728cf63e82f00e9e7248588b465f9dcb3349f8e20f8c7382985f4ca2\" returns successfully"
Feb 13 15:52:49.544496 containerd[1886]: time="2025-02-13T15:52:49.544376169Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:49.544706 containerd[1886]: time="2025-02-13T15:52:49.544678911Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:49.544706 containerd[1886]: time="2025-02-13T15:52:49.544701105Z" level=info msg="StopPodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:49.546714 containerd[1886]: time="2025-02-13T15:52:49.545070036Z" level=info msg="RemovePodSandbox for \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:49.546714 containerd[1886]: time="2025-02-13T15:52:49.545097162Z" level=info msg="Forcibly stopping sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\""
Feb 13 15:52:49.546714 containerd[1886]: time="2025-02-13T15:52:49.545185671Z" level=info msg="TearDown network for sandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" successfully"
Feb 13 15:52:49.569326 containerd[1886]: time="2025-02-13T15:52:49.568168767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.569326 containerd[1886]: time="2025-02-13T15:52:49.568248305Z" level=info msg="RemovePodSandbox \"def0ba93c25e711503b7073aa6b5034038745d3939da985a86680b094839fb63\" returns successfully"
Feb 13 15:52:49.569987 containerd[1886]: time="2025-02-13T15:52:49.569930962Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:49.570102 containerd[1886]: time="2025-02-13T15:52:49.570081479Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:49.570102 containerd[1886]: time="2025-02-13T15:52:49.570097988Z" level=info msg="StopPodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:49.570451 containerd[1886]: time="2025-02-13T15:52:49.570422300Z" level=info msg="RemovePodSandbox for \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:49.570533 containerd[1886]: time="2025-02-13T15:52:49.570450762Z" level=info msg="Forcibly stopping sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\""
Feb 13 15:52:49.570578 containerd[1886]: time="2025-02-13T15:52:49.570526473Z" level=info msg="TearDown network for sandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" successfully"
Feb 13 15:52:49.586347 containerd[1886]: time="2025-02-13T15:52:49.584054149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.586347 containerd[1886]: time="2025-02-13T15:52:49.584138971Z" level=info msg="RemovePodSandbox \"412cd7b377ee262c0b2471c39c61b5775a87ed9528d6bcd4bcfef20d29d49119\" returns successfully"
Feb 13 15:52:49.586763 containerd[1886]: time="2025-02-13T15:52:49.586600299Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:49.586842 containerd[1886]: time="2025-02-13T15:52:49.586735457Z" level=info msg="TearDown network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" successfully"
Feb 13 15:52:49.586842 containerd[1886]: time="2025-02-13T15:52:49.586796133Z" level=info msg="StopPodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" returns successfully"
Feb 13 15:52:49.588733 containerd[1886]: time="2025-02-13T15:52:49.587782366Z" level=info msg="RemovePodSandbox for \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:49.588733 containerd[1886]: time="2025-02-13T15:52:49.587817584Z" level=info msg="Forcibly stopping sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\""
Feb 13 15:52:49.588733 containerd[1886]: time="2025-02-13T15:52:49.587906137Z" level=info msg="TearDown network for sandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" successfully"
Feb 13 15:52:49.625538 containerd[1886]: time="2025-02-13T15:52:49.625367182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.625538 containerd[1886]: time="2025-02-13T15:52:49.625450510Z" level=info msg="RemovePodSandbox \"66f0ed573cb8754dc7778bd8085ccead996ca7fe1024866421c2dba30ead42ef\" returns successfully"
Feb 13 15:52:49.626132 containerd[1886]: time="2025-02-13T15:52:49.626085917Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\""
Feb 13 15:52:49.626310 containerd[1886]: time="2025-02-13T15:52:49.626275740Z" level=info msg="TearDown network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" successfully"
Feb 13 15:52:49.626310 containerd[1886]: time="2025-02-13T15:52:49.626298243Z" level=info msg="StopPodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" returns successfully"
Feb 13 15:52:49.627283 containerd[1886]: time="2025-02-13T15:52:49.627252311Z" level=info msg="RemovePodSandbox for \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\""
Feb 13 15:52:49.627361 containerd[1886]: time="2025-02-13T15:52:49.627286788Z" level=info msg="Forcibly stopping sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\""
Feb 13 15:52:49.627421 containerd[1886]: time="2025-02-13T15:52:49.627375458Z" level=info msg="TearDown network for sandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" successfully"
Feb 13 15:52:49.643955 containerd[1886]: time="2025-02-13T15:52:49.643893380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:52:49.644672 containerd[1886]: time="2025-02-13T15:52:49.643971059Z" level=info msg="RemovePodSandbox \"7e7d19df2b3b9f8383af2e2450bd0974e36d7a095ce5ae8f0b8dbe9b7cbbe8a4\" returns successfully"
Feb 13 15:52:49.767040 systemd[1]: run-containerd-runc-k8s.io-c7dc4bb6dc6953d247461e0a4b868442236849c37982830fded6147c5e5f9970-runc.VRGWr4.mount: Deactivated successfully.
Feb 13 15:52:49.900181 kubelet[2377]: I0213 15:52:49.900050    2377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=7.204755963 podStartE2EDuration="14.900012939s" podCreationTimestamp="2025-02-13 15:52:35 +0000 UTC" firstStartedPulling="2025-02-13 15:52:35.957820573 +0000 UTC m=+47.918806549" lastFinishedPulling="2025-02-13 15:52:43.653077552 +0000 UTC m=+55.614063525" observedRunningTime="2025-02-13 15:52:44.008838423 +0000 UTC m=+55.969824407" watchObservedRunningTime="2025-02-13 15:52:49.900012939 +0000 UTC m=+61.860998922"
Feb 13 15:52:50.103198 kubelet[2377]: E0213 15:52:50.103157    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:51.104126 kubelet[2377]: E0213 15:52:51.104054    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:52.104911 kubelet[2377]: E0213 15:52:52.104850    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:53.105710 kubelet[2377]: E0213 15:52:53.105652    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:54.106742 kubelet[2377]: E0213 15:52:54.106660    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:55.107514 kubelet[2377]: E0213 15:52:55.107458    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:56.108058 kubelet[2377]: E0213 15:52:56.107897    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:57.108984 kubelet[2377]: E0213 15:52:57.108319    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:58.109012 kubelet[2377]: E0213 15:52:58.108943    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:52:59.109363 kubelet[2377]: E0213 15:52:59.109306    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:00.109796 kubelet[2377]: E0213 15:53:00.109708    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:01.110450 kubelet[2377]: E0213 15:53:01.110242    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:02.110817 kubelet[2377]: E0213 15:53:02.110753    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:03.111454 kubelet[2377]: E0213 15:53:03.111396    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:04.112515 kubelet[2377]: E0213 15:53:04.112453    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:05.114189 kubelet[2377]: E0213 15:53:05.113702    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:06.114653 kubelet[2377]: E0213 15:53:06.114596    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:07.114979 kubelet[2377]: E0213 15:53:07.114923    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:08.115679 kubelet[2377]: E0213 15:53:08.115619    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:08.421148 kubelet[2377]: I0213 15:53:08.420863    2377 topology_manager.go:215] "Topology Admit Handler" podUID="a326cb48-f86e-4f0b-a04c-19d3707d8f4e" podNamespace="default" podName="test-pod-1"
Feb 13 15:53:08.437476 systemd[1]: Created slice kubepods-besteffort-poda326cb48_f86e_4f0b_a04c_19d3707d8f4e.slice - libcontainer container kubepods-besteffort-poda326cb48_f86e_4f0b_a04c_19d3707d8f4e.slice.
Feb 13 15:53:08.545191 kubelet[2377]: I0213 15:53:08.545079    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5224117c-0529-42aa-b0fd-6e834398f85d\" (UniqueName: \"kubernetes.io/nfs/a326cb48-f86e-4f0b-a04c-19d3707d8f4e-pvc-5224117c-0529-42aa-b0fd-6e834398f85d\") pod \"test-pod-1\" (UID: \"a326cb48-f86e-4f0b-a04c-19d3707d8f4e\") " pod="default/test-pod-1"
Feb 13 15:53:08.545191 kubelet[2377]: I0213 15:53:08.545184    2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5mq\" (UniqueName: \"kubernetes.io/projected/a326cb48-f86e-4f0b-a04c-19d3707d8f4e-kube-api-access-dh5mq\") pod \"test-pod-1\" (UID: \"a326cb48-f86e-4f0b-a04c-19d3707d8f4e\") " pod="default/test-pod-1"
Feb 13 15:53:08.758856 kernel: FS-Cache: Loaded
Feb 13 15:53:08.943806 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:53:08.943944 kernel: RPC: Registered udp transport module.
Feb 13 15:53:08.944019 kernel: RPC: Registered tcp transport module.
Feb 13 15:53:08.944848 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:53:08.945873 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:53:09.024364 kubelet[2377]: E0213 15:53:09.024279    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:09.117453 kubelet[2377]: E0213 15:53:09.116196    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:09.385396 kernel: NFS: Registering the id_resolver key type
Feb 13 15:53:09.385553 kernel: Key type id_resolver registered
Feb 13 15:53:09.385594 kernel: Key type id_legacy registered
Feb 13 15:53:09.446183 nfsidmap[4630]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 15:53:09.450862 nfsidmap[4631]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 15:53:09.649948 containerd[1886]: time="2025-02-13T15:53:09.649544801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a326cb48-f86e-4f0b-a04c-19d3707d8f4e,Namespace:default,Attempt:0,}"
Feb 13 15:53:09.894075 (udev-worker)[4627]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:53:09.902510 systemd-networkd[1728]: cali5ec59c6bf6e: Link UP
Feb 13 15:53:09.910251 systemd-networkd[1728]: cali5ec59c6bf6e: Gained carrier
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.725 [INFO][4633] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.66-k8s-test--pod--1-eth0  default  a326cb48-f86e-4f0b-a04c-19d3707d8f4e 1345 0 2025-02-13 15:52:36 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.28.66  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.725 [INFO][4633] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.788 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" HandleID="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Workload="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.801 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" HandleID="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Workload="172.31.28.66-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe0d0), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.66", "pod":"test-pod-1", "timestamp":"2025-02-13 15:53:09.788048137 +0000 UTC"}, Hostname:"172.31.28.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.801 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.802 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.802 [INFO][4643] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.66'
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.808 [INFO][4643] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.826 [INFO][4643] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.837 [INFO][4643] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.840 [INFO][4643] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.845 [INFO][4643] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.845 [INFO][4643] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.849 [INFO][4643] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.859 [INFO][4643] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.870 [INFO][4643] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.68/26] block=192.168.66.64/26 handle="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.870 [INFO][4643] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.68/26] handle="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" host="172.31.28.66"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.870 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.870 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.68/26] IPv6=[] ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" HandleID="k8s-pod-network.542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Workload="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.948310 containerd[1886]: 2025-02-13 15:53:09.874 [INFO][4633] cni-plugin/k8s.go 386: Populated endpoint ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a326cb48-f86e-4f0b-a04c-19d3707d8f4e", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:09.953088 containerd[1886]: 2025-02-13 15:53:09.879 [INFO][4633] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.68/32] ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.953088 containerd[1886]: 2025-02-13 15:53:09.879 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.953088 containerd[1886]: 2025-02-13 15:53:09.913 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:09.953088 containerd[1886]: 2025-02-13 15:53:09.914 [INFO][4633] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a326cb48-f86e-4f0b-a04c-19d3707d8f4e", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.66", ContainerID:"542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"16:a9:2f:6a:1f:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:53:09.953088 containerd[1886]: 2025-02-13 15:53:09.929 [INFO][4633] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.66-k8s-test--pod--1-eth0"
Feb 13 15:53:10.044945 containerd[1886]: time="2025-02-13T15:53:10.044813369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:10.044945 containerd[1886]: time="2025-02-13T15:53:10.044880464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:10.044945 containerd[1886]: time="2025-02-13T15:53:10.044896589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:10.047334 containerd[1886]: time="2025-02-13T15:53:10.045006455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:10.116352 kubelet[2377]: E0213 15:53:10.116315    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:10.127019 systemd[1]: Started cri-containerd-542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58.scope - libcontainer container 542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58.
Feb 13 15:53:10.273565 containerd[1886]: time="2025-02-13T15:53:10.273521487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a326cb48-f86e-4f0b-a04c-19d3707d8f4e,Namespace:default,Attempt:0,} returns sandbox id \"542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58\""
Feb 13 15:53:10.279849 containerd[1886]: time="2025-02-13T15:53:10.279689429Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:53:10.734120 containerd[1886]: time="2025-02-13T15:53:10.733959467Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:53:10.745608 containerd[1886]: time="2025-02-13T15:53:10.745213481Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 465.4395ms"
Feb 13 15:53:10.745608 containerd[1886]: time="2025-02-13T15:53:10.745268757Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:53:10.755767 containerd[1886]: time="2025-02-13T15:53:10.755269193Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:53:10.764473 containerd[1886]: time="2025-02-13T15:53:10.764410177Z" level=info msg="CreateContainer within sandbox \"542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:53:10.835915 containerd[1886]: time="2025-02-13T15:53:10.835762831Z" level=info msg="CreateContainer within sandbox \"542c6a1420dd016be56a24a972c06d5f26f2de89bb6eeb54045527e04bdc0c58\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"07efbe91aa591f05112d351f2de1dfe8ac411a50392c0d60888157fb7a80cf06\""
Feb 13 15:53:10.843782 containerd[1886]: time="2025-02-13T15:53:10.843696899Z" level=info msg="StartContainer for \"07efbe91aa591f05112d351f2de1dfe8ac411a50392c0d60888157fb7a80cf06\""
Feb 13 15:53:10.999685 systemd[1]: Started cri-containerd-07efbe91aa591f05112d351f2de1dfe8ac411a50392c0d60888157fb7a80cf06.scope - libcontainer container 07efbe91aa591f05112d351f2de1dfe8ac411a50392c0d60888157fb7a80cf06.
Feb 13 15:53:11.082399 containerd[1886]: time="2025-02-13T15:53:11.081986410Z" level=info msg="StartContainer for \"07efbe91aa591f05112d351f2de1dfe8ac411a50392c0d60888157fb7a80cf06\" returns successfully"
Feb 13 15:53:11.117222 kubelet[2377]: E0213 15:53:11.117162    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:11.422245 systemd-networkd[1728]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 15:53:12.118065 kubelet[2377]: E0213 15:53:12.118001    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:13.118430 kubelet[2377]: E0213 15:53:13.118381    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:13.656233 ntpd[1865]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 15:53:13.656695 ntpd[1865]: 13 Feb 15:53:13 ntpd[1865]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 15:53:14.118949 kubelet[2377]: E0213 15:53:14.118899    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:15.119166 kubelet[2377]: E0213 15:53:15.119108    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:16.119397 kubelet[2377]: E0213 15:53:16.119339    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:17.120517 kubelet[2377]: E0213 15:53:17.120456    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:18.121267 kubelet[2377]: E0213 15:53:18.121206    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:19.121894 kubelet[2377]: E0213 15:53:19.121837    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:20.123079 kubelet[2377]: E0213 15:53:20.123021    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:21.124231 kubelet[2377]: E0213 15:53:21.124171    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:22.124485 kubelet[2377]: E0213 15:53:22.124426    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:23.124979 kubelet[2377]: E0213 15:53:23.124922    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:24.126173 kubelet[2377]: E0213 15:53:24.126116    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:25.126386 kubelet[2377]: E0213 15:53:25.126320    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:26.126877 kubelet[2377]: E0213 15:53:26.126700    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:27.127356 kubelet[2377]: E0213 15:53:27.127295    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:28.127536 kubelet[2377]: E0213 15:53:28.127475    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:29.024122 kubelet[2377]: E0213 15:53:29.024066    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:29.128014 kubelet[2377]: E0213 15:53:29.127954    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:30.129018 kubelet[2377]: E0213 15:53:30.128929    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:31.129258 kubelet[2377]: E0213 15:53:31.129201    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:31.326202 kubelet[2377]: E0213 15:53:31.326114    2377 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.28.66)"
Feb 13 15:53:32.129509 kubelet[2377]: E0213 15:53:32.129453    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:33.129870 kubelet[2377]: E0213 15:53:33.129767    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:34.130776 kubelet[2377]: E0213 15:53:34.130576    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:35.131353 kubelet[2377]: E0213 15:53:35.131295    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:36.132149 kubelet[2377]: E0213 15:53:36.132093    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:37.132830 kubelet[2377]: E0213 15:53:37.132774    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:38.133193 kubelet[2377]: E0213 15:53:38.133135    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:39.134039 kubelet[2377]: E0213 15:53:39.133916    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:40.134589 kubelet[2377]: E0213 15:53:40.134535    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:41.135341 kubelet[2377]: E0213 15:53:41.135281    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:41.327599 kubelet[2377]: E0213 15:53:41.327530    2377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:53:42.136007 kubelet[2377]: E0213 15:53:42.135944    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:43.137097 kubelet[2377]: E0213 15:53:43.137032    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:44.137785 kubelet[2377]: E0213 15:53:44.137713    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:45.138687 kubelet[2377]: E0213 15:53:45.138628    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:46.139834 kubelet[2377]: E0213 15:53:46.139773    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:47.140582 kubelet[2377]: E0213 15:53:47.140524    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:48.141021 kubelet[2377]: E0213 15:53:48.140961    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:49.024645 kubelet[2377]: E0213 15:53:49.024587    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:49.141476 kubelet[2377]: E0213 15:53:49.141425    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:50.142332 kubelet[2377]: E0213 15:53:50.142291    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:51.142511 kubelet[2377]: E0213 15:53:51.142443    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:51.328378 kubelet[2377]: E0213 15:53:51.328317    2377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:53:52.142903 kubelet[2377]: E0213 15:53:52.142828    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:53.143883 kubelet[2377]: E0213 15:53:53.143819    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:54.144995 kubelet[2377]: E0213 15:53:54.144932    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:55.145791 kubelet[2377]: E0213 15:53:55.145718    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:56.146578 kubelet[2377]: E0213 15:53:56.146516    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:57.147177 kubelet[2377]: E0213 15:53:57.147135    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:58.147817 kubelet[2377]: E0213 15:53:58.147757    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:53:59.148406 kubelet[2377]: E0213 15:53:59.148343    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:00.149262 kubelet[2377]: E0213 15:54:00.149202    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:01.150315 kubelet[2377]: E0213 15:54:01.150253    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:01.329444 kubelet[2377]: E0213 15:54:01.329367    2377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:54:01.422063 kubelet[2377]: E0213 15:54:01.420310    2377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": unexpected EOF"
Feb 13 15:54:01.422063 kubelet[2377]: I0213 15:54:01.420362    2377 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 15:54:01.642477 kubelet[2377]: E0213 15:54:01.642425    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?resourceVersion=0&timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused"
Feb 13 15:54:01.643140 kubelet[2377]: E0213 15:54:01.643096    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused"
Feb 13 15:54:01.643698 kubelet[2377]: E0213 15:54:01.643668    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused"
Feb 13 15:54:01.644991 kubelet[2377]: E0213 15:54:01.644960    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused"
Feb 13 15:54:01.646030 kubelet[2377]: E0213 15:54:01.645999    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused"
Feb 13 15:54:01.646030 kubelet[2377]: E0213 15:54:01.646029    2377 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Feb 13 15:54:02.151516 kubelet[2377]: E0213 15:54:02.151452    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:02.484463 kubelet[2377]: E0213 15:54:02.484311    2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": dial tcp 172.31.25.152:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.28.66:59298->172.31.25.152:6443: read: connection reset by peer" interval="200ms"
Feb 13 15:54:03.152319 kubelet[2377]: E0213 15:54:03.152239    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:04.153411 kubelet[2377]: E0213 15:54:04.153350    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:05.153582 kubelet[2377]: E0213 15:54:05.153521    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:06.154179 kubelet[2377]: E0213 15:54:06.154129    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:07.154872 kubelet[2377]: E0213 15:54:07.154809    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:08.155535 kubelet[2377]: E0213 15:54:08.155475    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:09.023981 kubelet[2377]: E0213 15:54:09.023922    2377 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:09.156052 kubelet[2377]: E0213 15:54:09.155997    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:10.156568 kubelet[2377]: E0213 15:54:10.156504    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:11.157450 kubelet[2377]: E0213 15:54:11.157293    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:12.157980 kubelet[2377]: E0213 15:54:12.157923    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:12.687916 kubelet[2377]: E0213 15:54:12.687864    2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.66?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 13 15:54:13.158915 kubelet[2377]: E0213 15:54:13.158850    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:14.159351 kubelet[2377]: E0213 15:54:14.159289    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:15.160062 kubelet[2377]: E0213 15:54:15.160010    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:16.160799 kubelet[2377]: E0213 15:54:16.160743    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:17.161289 kubelet[2377]: E0213 15:54:17.161225    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:18.162313 kubelet[2377]: E0213 15:54:18.162141    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:19.162975 kubelet[2377]: E0213 15:54:19.162915    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:20.164110 kubelet[2377]: E0213 15:54:20.164064    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:21.167183 kubelet[2377]: E0213 15:54:21.167122    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:22.005972 kubelet[2377]: E0213 15:54:22.005916    2377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.66\": Get \"https://172.31.25.152:6443/api/v1/nodes/172.31.28.66?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:54:22.168177 kubelet[2377]: E0213 15:54:22.168117    2377 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"