Feb 13 15:22:35.161920 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 15:22:35.161965 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025 Feb 13 15:22:35.161990 kernel: KASLR disabled due to lack of seed Feb 13 15:22:35.162006 kernel: efi: EFI v2.7 by EDK II Feb 13 15:22:35.162022 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Feb 13 15:22:35.162038 kernel: secureboot: Secure boot disabled Feb 13 15:22:35.162056 kernel: ACPI: Early table checksum verification disabled Feb 13 15:22:35.162072 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 15:22:35.162088 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 15:22:35.162103 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 15:22:35.162123 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 15:22:35.162139 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 15:22:35.162171 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 15:22:35.162192 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 15:22:35.162212 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 15:22:35.164904 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 15:22:35.164926 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 15:22:35.164943 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 15:22:35.164960 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 15:22:35.164977 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 15:22:35.164994 kernel: printk: bootconsole [uart0] enabled Feb 13 15:22:35.165010 kernel: NUMA: Failed to initialise from firmware Feb 13 15:22:35.165027 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 15:22:35.165044 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 15:22:35.165061 kernel: Zone ranges: Feb 13 15:22:35.165077 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:22:35.165099 kernel: DMA32 empty Feb 13 15:22:35.165116 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 15:22:35.165133 kernel: Movable zone start for each node Feb 13 15:22:35.165149 kernel: Early memory node ranges Feb 13 15:22:35.165165 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 15:22:35.165182 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 15:22:35.165198 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 15:22:35.165234 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 15:22:35.165255 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 15:22:35.165272 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 15:22:35.165288 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 15:22:35.165305 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 15:22:35.165327 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Feb 13 15:22:35.165344 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 15:22:35.165369 kernel: psci: probing for conduit method from ACPI. Feb 13 15:22:35.165386 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 15:22:35.165404 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:22:35.165425 kernel: psci: Trusted OS migration not required Feb 13 15:22:35.165443 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:22:35.165460 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:22:35.165478 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:22:35.165496 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:22:35.165513 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:22:35.165530 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:22:35.165548 kernel: CPU features: detected: Spectre-v2 Feb 13 15:22:35.165565 kernel: CPU features: detected: Spectre-v3a Feb 13 15:22:35.165582 kernel: CPU features: detected: Spectre-BHB Feb 13 15:22:35.165599 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 15:22:35.165616 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 15:22:35.165638 kernel: alternatives: applying boot alternatives Feb 13 15:22:35.165658 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:22:35.165677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:22:35.165695 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:22:35.165713 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:22:35.165730 kernel: Fallback order for Node 0: 0 Feb 13 15:22:35.165748 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 15:22:35.165766 kernel: Policy zone: Normal Feb 13 15:22:35.165783 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:22:35.165801 kernel: software IO TLB: area num 2. Feb 13 15:22:35.165824 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 15:22:35.165842 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Feb 13 15:22:35.165860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:22:35.165877 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:22:35.165895 kernel: rcu: RCU event tracing is enabled. Feb 13 15:22:35.165913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:22:35.165931 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:22:35.165949 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:22:35.165968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:22:35.165985 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:22:35.166002 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:22:35.166024 kernel: GICv3: 96 SPIs implemented Feb 13 15:22:35.166041 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:22:35.166058 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:22:35.166075 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 15:22:35.166092 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 15:22:35.166109 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 15:22:35.166127 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:22:35.166145 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:22:35.166188 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 15:22:35.166210 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 15:22:35.166247 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 15:22:35.166265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:22:35.166289 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 15:22:35.166307 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 15:22:35.166325 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 15:22:35.166342 kernel: Console: colour dummy device 80x25 Feb 13 15:22:35.166360 kernel: printk: console [tty1] enabled Feb 13 15:22:35.166378 kernel: ACPI: Core revision 20230628 Feb 13 15:22:35.166396 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 15:22:35.166415 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:22:35.166433 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:22:35.166450 kernel: landlock: Up and running. Feb 13 15:22:35.166472 kernel: SELinux: Initializing. Feb 13 15:22:35.166490 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:22:35.166508 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:22:35.166526 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:22:35.166544 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:22:35.166562 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:22:35.166581 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:22:35.166598 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 15:22:35.166620 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 15:22:35.166638 kernel: Remapping and enabling EFI services. Feb 13 15:22:35.166656 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:22:35.166673 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:22:35.166691 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 15:22:35.166709 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 15:22:35.166726 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 15:22:35.166744 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:22:35.166761 kernel: SMP: Total of 2 processors activated. 
Feb 13 15:22:35.166779 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:22:35.166801 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 15:22:35.166819 kernel: CPU features: detected: CRC32 instructions Feb 13 15:22:35.166849 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:22:35.166872 kernel: alternatives: applying system-wide alternatives Feb 13 15:22:35.166890 kernel: devtmpfs: initialized Feb 13 15:22:35.166908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:22:35.166927 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:22:35.166945 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:22:35.166963 kernel: SMBIOS 3.0.0 present. Feb 13 15:22:35.166987 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 15:22:35.167006 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:22:35.167032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:22:35.167051 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:22:35.167070 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:22:35.167090 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:22:35.167110 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Feb 13 15:22:35.167134 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:22:35.167153 kernel: cpuidle: using governor menu Feb 13 15:22:35.167172 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:22:35.167191 kernel: ASID allocator initialised with 65536 entries Feb 13 15:22:35.167210 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:22:35.167250 kernel: Serial: AMBA PL011 UART driver Feb 13 15:22:35.167271 kernel: Modules: 17360 pages in range for non-PLT usage Feb 13 15:22:35.167290 kernel: Modules: 508880 pages in range for PLT usage Feb 13 15:22:35.167310 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:22:35.167335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:22:35.167355 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:22:35.167374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:22:35.167393 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:22:35.167411 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:22:35.167429 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:22:35.167448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:22:35.167466 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:22:35.167484 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:22:35.167507 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:22:35.167525 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:22:35.167544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:22:35.167562 kernel: ACPI: Interpreter enabled Feb 13 15:22:35.167580 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:22:35.167598 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:22:35.167617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 15:22:35.167968 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:22:35.168203 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:22:35.168449 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:22:35.168652 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 15:22:35.168854 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 15:22:35.168880 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 15:22:35.168900 kernel: acpiphp: Slot [1] registered Feb 13 15:22:35.168923 kernel: acpiphp: Slot [2] registered Feb 13 15:22:35.168964 kernel: acpiphp: Slot [3] registered Feb 13 15:22:35.169036 kernel: acpiphp: Slot [4] registered Feb 13 15:22:35.169064 kernel: acpiphp: Slot [5] registered Feb 13 15:22:35.169084 kernel: acpiphp: Slot [6] registered Feb 13 15:22:35.169103 kernel: acpiphp: Slot [7] registered Feb 13 15:22:35.169121 kernel: acpiphp: Slot [8] registered Feb 13 15:22:35.169140 kernel: acpiphp: Slot [9] registered Feb 13 15:22:35.169158 kernel: acpiphp: Slot [10] registered Feb 13 15:22:35.169177 kernel: acpiphp: Slot [11] registered Feb 13 15:22:35.169196 kernel: acpiphp: Slot [12] registered Feb 13 15:22:35.171287 kernel: acpiphp: Slot [13] registered Feb 13 15:22:35.171341 kernel: acpiphp: Slot [14] registered Feb 13 15:22:35.171361 kernel: acpiphp: Slot [15] registered Feb 13 15:22:35.171380 kernel: acpiphp: Slot [16] registered Feb 13 15:22:35.171399 kernel: acpiphp: Slot [17] registered Feb 13 15:22:35.171417 kernel: acpiphp: Slot [18] registered Feb 13 15:22:35.171436 kernel: acpiphp: Slot [19] registered Feb 13 15:22:35.171454 kernel: acpiphp: Slot [20] registered Feb 13 15:22:35.171473 kernel: acpiphp: Slot [21] registered Feb 13 15:22:35.171491 kernel: acpiphp: Slot [22] registered Feb 13 15:22:35.171514 kernel: acpiphp: Slot [23] registered Feb 13 15:22:35.171532 kernel: acpiphp: Slot [24] registered Feb 13 15:22:35.171551 kernel: acpiphp: Slot [25] registered Feb 13 15:22:35.171569 kernel: acpiphp: Slot [26] registered Feb 13 15:22:35.171587 kernel: acpiphp: Slot [27] registered Feb 13 15:22:35.171606 kernel: acpiphp: Slot [28] registered Feb 13 15:22:35.171624 kernel: acpiphp: Slot [29] registered Feb 13 15:22:35.171644 kernel: acpiphp: Slot [30] registered Feb 13 15:22:35.171734 kernel: acpiphp: Slot [31] registered Feb 13 15:22:35.171755 kernel: PCI host bridge to bus 0000:00 Feb 13 15:22:35.172011 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 15:22:35.172205 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:22:35.174535 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 15:22:35.174748 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 15:22:35.174978 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 15:22:35.177444 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 15:22:35.177712 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 15:22:35.177934 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 15:22:35.178144 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 15:22:35.178417 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:22:35.178650 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 15:22:35.178862 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 15:22:35.179070 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 15:22:35.182412 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 15:22:35.182655 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:22:35.182885 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 15:22:35.183097 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 15:22:35.183337 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 15:22:35.183548 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 15:22:35.183768 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 15:22:35.183971 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 15:22:35.184155 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:22:35.185568 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 15:22:35.185619 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:22:35.185641 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:22:35.185662 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:22:35.185682 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:22:35.185701 kernel: iommu: Default domain type: Translated Feb 13 15:22:35.185736 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:22:35.185755 kernel: efivars: Registered efivars operations Feb 13 15:22:35.185774 kernel: vgaarb: loaded Feb 13 15:22:35.185793 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:22:35.185812 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:22:35.185834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:22:35.185854 kernel: pnp: PnP ACPI init Feb 13 15:22:35.186096 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 15:22:35.186132 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:22:35.186174 kernel: NET: Registered PF_INET protocol family Feb 13 15:22:35.186209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:22:35.186259 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:22:35.187463 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:22:35.187486 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:22:35.187505 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:22:35.187525 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:22:35.187544 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:22:35.187571 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:22:35.187590 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:22:35.187609 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:22:35.187627 kernel: kvm [1]: HYP mode not available Feb 13 15:22:35.187646 kernel: Initialise system trusted keyrings Feb 13 15:22:35.187665 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:22:35.187683 kernel: Key type asymmetric registered Feb 13 15:22:35.187702 kernel: Asymmetric key parser 'x509' registered Feb 13 15:22:35.187720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:22:35.187743 kernel: io scheduler mq-deadline registered Feb 13 
15:22:35.187762 kernel: io scheduler kyber registered Feb 13 15:22:35.187781 kernel: io scheduler bfq registered Feb 13 15:22:35.188050 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 15:22:35.188079 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:22:35.188098 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:22:35.188116 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 15:22:35.188135 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 15:22:35.188159 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:22:35.188179 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:22:35.190828 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 15:22:35.190872 kernel: printk: console [ttyS0] disabled Feb 13 15:22:35.190892 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 15:22:35.190911 kernel: printk: console [ttyS0] enabled Feb 13 15:22:35.190930 kernel: printk: bootconsole [uart0] disabled Feb 13 15:22:35.190949 kernel: thunder_xcv, ver 1.0 Feb 13 15:22:35.190967 kernel: thunder_bgx, ver 1.0 Feb 13 15:22:35.190985 kernel: nicpf, ver 1.0 Feb 13 15:22:35.191014 kernel: nicvf, ver 1.0 Feb 13 15:22:35.191263 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:22:35.191475 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:22:34 UTC (1739460154) Feb 13 15:22:35.191501 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:22:35.191521 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 15:22:35.191539 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:22:35.191558 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:22:35.191595 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:22:35.191618 kernel: Segment Routing with IPv6 Feb 13 15:22:35.191637 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:22:35.191656 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:22:35.191674 kernel: Key type dns_resolver registered Feb 13 15:22:35.191693 kernel: registered taskstats version 1 Feb 13 15:22:35.191711 kernel: Loading compiled-in X.509 certificates Feb 13 15:22:35.191729 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e' Feb 13 15:22:35.191748 kernel: Key type .fscrypt registered Feb 13 15:22:35.191766 kernel: Key type fscrypt-provisioning registered Feb 13 15:22:35.191790 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:22:35.191809 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:22:35.191827 kernel: ima: No architecture policies found Feb 13 15:22:35.191846 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:22:35.191864 kernel: clk: Disabling unused clocks Feb 13 15:22:35.191883 kernel: Freeing unused kernel memory: 39936K Feb 13 15:22:35.191902 kernel: Run /init as init process Feb 13 15:22:35.191921 kernel: with arguments: Feb 13 15:22:35.191942 kernel: /init Feb 13 15:22:35.191967 kernel: with environment: Feb 13 15:22:35.191988 kernel: HOME=/ Feb 13 15:22:35.192008 kernel: TERM=linux Feb 13 15:22:35.192026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:22:35.192050 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:22:35.192075 systemd[1]: Detected virtualization amazon. Feb 13 15:22:35.192096 systemd[1]: Detected architecture arm64. Feb 13 15:22:35.192122 systemd[1]: Running in initrd. Feb 13 15:22:35.192143 systemd[1]: No hostname configured, using default hostname. Feb 13 15:22:35.192163 systemd[1]: Hostname set to . Feb 13 15:22:35.192184 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:22:35.194261 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:22:35.194314 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:35.194336 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:35.194359 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:22:35.194391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:22:35.194413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:22:35.194434 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:22:35.194459 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:22:35.194480 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:22:35.194501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:35.194522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:35.194547 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:22:35.194568 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:22:35.194589 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:22:35.194609 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:22:35.194629 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:22:35.194650 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:22:35.194671 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:22:35.194691 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:22:35.194712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:22:35.194737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:35.194758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:35.194778 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:22:35.194798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:22:35.194819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:22:35.194840 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:22:35.194860 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:22:35.194881 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:22:35.194905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:22:35.194926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:35.194947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:22:35.194967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:35.195051 systemd-journald[251]: Collecting audit messages is disabled. Feb 13 15:22:35.195103 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:22:35.195126 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:22:35.195147 systemd-journald[251]: Journal started Feb 13 15:22:35.195198 systemd-journald[251]: Runtime Journal (/run/log/journal/ec23fd6f937b0c7c00ea92d08af93f69) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:22:35.169495 systemd-modules-load[252]: Inserted module 'overlay' Feb 13 15:22:35.205269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:22:35.205332 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:22:35.205362 kernel: Bridge firewalling registered Feb 13 15:22:35.206031 systemd-modules-load[252]: Inserted module 'br_netfilter' Feb 13 15:22:35.214648 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:35.221282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:35.230384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:35.246727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:35.253444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:22:35.274523 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:22:35.280896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:22:35.305138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:35.311929 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:35.332810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:22:35.340883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:35.366294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 15:22:35.369986 dracut-cmdline[287]: dracut-dracut-053 Feb 13 15:22:35.382329 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:22:35.395458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:22:35.459762 systemd-resolved[299]: Positive Trust Anchors: Feb 13 15:22:35.459796 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:22:35.459860 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:22:35.536270 kernel: SCSI subsystem initialized Feb 13 15:22:35.544414 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:22:35.557366 kernel: iscsi: registered transport (tcp) Feb 13 15:22:35.579610 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:22:35.579685 kernel: QLogic iSCSI HBA Driver Feb 13 15:22:35.679249 kernel: random: crng init done Feb 13 15:22:35.678572 systemd-resolved[299]: Defaulting to hostname 'linux'. Feb 13 15:22:35.682636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:22:35.686758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:22:35.710308 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:22:35.725518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:22:35.756463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:22:35.756539 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:22:35.756566 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:22:35.822276 kernel: raid6: neonx8 gen() 6537 MB/s Feb 13 15:22:35.839249 kernel: raid6: neonx4 gen() 6495 MB/s Feb 13 15:22:35.856247 kernel: raid6: neonx2 gen() 5436 MB/s Feb 13 15:22:35.873248 kernel: raid6: neonx1 gen() 3921 MB/s Feb 13 15:22:35.890247 kernel: raid6: int64x8 gen() 3593 MB/s Feb 13 15:22:35.907247 kernel: raid6: int64x4 gen() 3692 MB/s Feb 13 15:22:35.924248 kernel: raid6: int64x2 gen() 3596 MB/s Feb 13 15:22:35.942010 kernel: raid6: int64x1 gen() 2753 MB/s Feb 13 15:22:35.942043 kernel: raid6: using algorithm neonx8 gen() 6537 MB/s Feb 13 15:22:35.960003 kernel: raid6: .... 
xor() 4798 MB/s, rmw enabled Feb 13 15:22:35.960042 kernel: raid6: using neon recovery algorithm Feb 13 15:22:35.967251 kernel: xor: measuring software checksum speed Feb 13 15:22:35.968250 kernel: 8regs : 11940 MB/sec Feb 13 15:22:35.969247 kernel: 32regs : 11903 MB/sec Feb 13 15:22:35.971344 kernel: arm64_neon : 8904 MB/sec Feb 13 15:22:35.971387 kernel: xor: using function: 8regs (11940 MB/sec) Feb 13 15:22:36.054298 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:22:36.072182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:22:36.083594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:36.123780 systemd-udevd[472]: Using default interface naming scheme 'v255'. Feb 13 15:22:36.132260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:22:36.150483 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:22:36.197981 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Feb 13 15:22:36.253364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:22:36.265511 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:22:36.381168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:36.394515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:22:36.439828 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:22:36.443034 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:22:36.446982 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:36.449247 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:22:36.471010 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:22:36.508973 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:22:36.596331 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:22:36.596434 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 15:22:36.633840 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 15:22:36.634109 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 15:22:36.634406 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:30:c8:d4:26:85 Feb 13 15:22:36.607861 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:22:36.608082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:36.615431 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:36.617737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:36.618021 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:36.620234 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:36.629683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:36.659251 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:22:36.664322 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:22:36.664410 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 15:22:36.675275 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 15:22:36.688230 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:22:36.688290 kernel: GPT:9289727 != 16777215 Feb 13 15:22:36.688316 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:22:36.688341 kernel: GPT:9289727 != 16777215 Feb 13 15:22:36.688365 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:22:36.688425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:36.696382 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:22:36.702643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:36.743406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:36.797271 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (533) Feb 13 15:22:36.809270 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (529) Feb 13 15:22:36.873999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 15:22:36.929351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:22:36.946948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 15:22:36.962626 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 15:22:36.968167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 15:22:36.987455 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:22:36.999774 disk-uuid[663]: Primary Header is updated. Feb 13 15:22:36.999774 disk-uuid[663]: Secondary Entries is updated. Feb 13 15:22:36.999774 disk-uuid[663]: Secondary Header is updated. Feb 13 15:22:37.011276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:22:37.022256 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:22:38.032320 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:22:38.033210 disk-uuid[664]: The operation has completed successfully. Feb 13 15:22:38.224941 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:22:38.227291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:22:38.276679 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:22:38.286241 sh[923]: Success Feb 13 15:22:38.315259 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:22:38.438872 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:22:38.444391 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:22:38.459905 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:22:38.496060 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:22:38.496123 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:22:38.496161 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:22:38.497414 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:22:38.498460 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:22:38.527254 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:22:38.531131 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:22:38.533454 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:22:38.541564 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:22:38.553605 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:22:38.586730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:22:38.586845 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:22:38.588097 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:22:38.596257 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:22:38.615662 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:22:38.619290 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:22:38.627494 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:22:38.638589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:22:38.745063 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:22:38.768812 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:22:38.825937 systemd-networkd[1117]: lo: Link UP Feb 13 15:22:38.826301 systemd-networkd[1117]: lo: Gained carrier Feb 13 15:22:38.832209 systemd-networkd[1117]: Enumeration completed Feb 13 15:22:38.833739 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:22:38.836546 systemd[1]: Reached target network.target - Network. Feb 13 15:22:38.837897 ignition[1044]: Ignition 2.20.0 Feb 13 15:22:38.837917 ignition[1044]: Stage: fetch-offline Feb 13 15:22:38.838369 ignition[1044]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:38.847182 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:38.838393 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:38.847191 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:22:38.838810 ignition[1044]: Ignition finished successfully Feb 13 15:22:38.852332 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:22:38.876619 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:22:38.877681 systemd-networkd[1117]: eth0: Link UP Feb 13 15:22:38.877691 systemd-networkd[1117]: eth0: Gained carrier Feb 13 15:22:38.877716 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:38.895902 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.20.64/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:22:38.913570 ignition[1125]: Ignition 2.20.0 Feb 13 15:22:38.915083 ignition[1125]: Stage: fetch Feb 13 15:22:38.916691 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:38.916732 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:38.918424 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:38.930531 ignition[1125]: PUT result: OK Feb 13 15:22:38.933558 ignition[1125]: parsed url from cmdline: "" Feb 13 15:22:38.933689 ignition[1125]: no config URL provided Feb 13 15:22:38.933709 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:22:38.933735 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:22:38.933767 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:38.937360 ignition[1125]: PUT result: OK Feb 13 15:22:38.941998 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 15:22:38.944730 ignition[1125]: GET result: OK Feb 13 15:22:38.944864 ignition[1125]: parsing config with SHA512: 054ad0839debc2190fbf97ba5c88957806590c496595987f5241391a4130386b32d4f369e6e9795ada4924d9854acd1a84e31cd0ce0ca5156b5cd0a34b201a25 Feb 13 15:22:38.952658 unknown[1125]: fetched base config from "system" Feb 13 15:22:38.952680 unknown[1125]: fetched base config from "system" Feb 13 15:22:38.954082 ignition[1125]: fetch: fetch complete Feb 13 15:22:38.952693 unknown[1125]: fetched user config from "aws" Feb 13 15:22:38.954170 ignition[1125]: fetch: fetch passed Feb 13 15:22:38.962183 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:22:38.954757 ignition[1125]: Ignition finished successfully Feb 13 15:22:38.983440 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:22:39.007670 ignition[1133]: Ignition 2.20.0 Feb 13 15:22:39.008007 ignition[1133]: Stage: kargs Feb 13 15:22:39.008883 ignition[1133]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:39.008924 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:39.009077 ignition[1133]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:39.012507 ignition[1133]: PUT result: OK Feb 13 15:22:39.021635 ignition[1133]: kargs: kargs passed Feb 13 15:22:39.021730 ignition[1133]: Ignition finished successfully Feb 13 15:22:39.026333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:22:39.036514 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:22:39.069367 ignition[1139]: Ignition 2.20.0 Feb 13 15:22:39.069397 ignition[1139]: Stage: disks Feb 13 15:22:39.070947 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:39.070973 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:39.071119 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:39.072716 ignition[1139]: PUT result: OK Feb 13 15:22:39.082253 ignition[1139]: disks: disks passed Feb 13 15:22:39.082386 ignition[1139]: Ignition finished successfully Feb 13 15:22:39.087064 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:22:39.089408 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:22:39.093311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:22:39.096311 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:22:39.099863 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:22:39.101777 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:22:39.119510 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:22:39.167009 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:22:39.173687 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:22:39.183428 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:22:39.281281 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none. Feb 13 15:22:39.282518 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:22:39.286701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:22:39.303405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:22:39.319776 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:22:39.324919 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:22:39.325029 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:22:39.325091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:22:39.339959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:22:39.361252 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166) Feb 13 15:22:39.357671 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:22:39.371846 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:22:39.371888 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:22:39.371915 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:22:39.385680 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:22:39.390092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:22:39.467260 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:22:39.478488 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:22:39.487500 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:22:39.495919 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:22:39.664171 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:22:39.684558 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:22:39.691761 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:22:39.710211 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:22:39.712481 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:22:39.754474 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:22:39.762274 ignition[1283]: INFO : Ignition 2.20.0 Feb 13 15:22:39.762274 ignition[1283]: INFO : Stage: mount Feb 13 15:22:39.765694 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:39.765694 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:39.765694 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:39.772396 ignition[1283]: INFO : PUT result: OK Feb 13 15:22:39.776790 ignition[1283]: INFO : mount: mount passed Feb 13 15:22:39.776790 ignition[1283]: INFO : Ignition finished successfully Feb 13 15:22:39.784296 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:22:39.803532 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:22:39.832102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:22:39.854257 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1294) Feb 13 15:22:39.858456 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:22:39.858515 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:22:39.858542 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:22:39.864248 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:22:39.867990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:22:39.902708 ignition[1311]: INFO : Ignition 2.20.0 Feb 13 15:22:39.902708 ignition[1311]: INFO : Stage: files Feb 13 15:22:39.906189 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:39.906189 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:39.906189 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:39.912717 ignition[1311]: INFO : PUT result: OK Feb 13 15:22:39.916977 ignition[1311]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:22:39.923559 ignition[1311]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:22:39.923559 ignition[1311]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:22:39.935251 ignition[1311]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:22:39.938171 ignition[1311]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:22:39.941503 unknown[1311]: wrote ssh authorized keys file for user: core Feb 13 15:22:39.943895 ignition[1311]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:22:39.947830 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:22:39.947830 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:22:39.954125 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 15:22:40.153456 systemd-networkd[1117]: eth0: Gained IPv6LL Feb 13 15:22:40.450818 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:22:40.846246 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:22:40.850490 ignition[1311]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:22:40.850490 ignition[1311]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:22:40.850490 ignition[1311]: INFO : files: files passed Feb 13 15:22:40.850490 ignition[1311]: INFO : Ignition finished successfully Feb 13 15:22:40.854197 systemd[1]: Finished 
ignition-files.service - Ignition (files). Feb 13 15:22:40.881613 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:22:40.890019 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:22:40.895346 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:22:40.897373 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:22:40.931247 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:40.931247 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:40.938793 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:40.945347 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:22:40.948558 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:22:40.967483 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:22:41.014765 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:22:41.015141 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:22:41.022887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:22:41.024845 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:22:41.028615 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:22:41.043618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:22:41.069425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:22:41.084481 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:22:41.108342 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:22:41.112657 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:41.113978 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:22:41.114840 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:22:41.115139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:22:41.115984 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:22:41.116389 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:22:41.116865 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:22:41.117156 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:22:41.117740 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:22:41.118054 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:22:41.118600 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:22:41.119388 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:22:41.119959 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:22:41.120832 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:22:41.121045 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 13 15:22:41.121365 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:22:41.122268 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:41.122937 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:41.123448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:22:41.141226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:41.225565 ignition[1363]: INFO : Ignition 2.20.0 Feb 13 15:22:41.225565 ignition[1363]: INFO : Stage: umount Feb 13 15:22:41.225565 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:41.225565 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:22:41.225565 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:22:41.225565 ignition[1363]: INFO : PUT result: OK Feb 13 15:22:41.145883 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:22:41.146264 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:22:41.148806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:22:41.149107 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:22:41.151726 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:22:41.152023 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:22:41.171400 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:22:41.230762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:22:41.253103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:22:41.253587 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:41.259798 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:22:41.260069 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:22:41.272437 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:22:41.273073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:22:41.281433 ignition[1363]: INFO : umount: umount passed Feb 13 15:22:41.283075 ignition[1363]: INFO : Ignition finished successfully Feb 13 15:22:41.284457 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:22:41.284647 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:22:41.291504 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:22:41.293341 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:22:41.297668 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:22:41.297785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:22:41.299775 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:22:41.299860 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:22:41.303464 systemd[1]: Stopped target network.target - Network. Feb 13 15:22:41.309179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:22:41.310143 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:22:41.316895 systemd[1]: Stopped target paths.target - Path Units. 
Feb 13 15:22:41.336627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:22:41.337353 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:41.342021 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:22:41.348552 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:22:41.350422 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:22:41.350509 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:22:41.352393 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:22:41.352465 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:22:41.354419 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:22:41.354510 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:22:41.356435 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:22:41.356512 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:22:41.358783 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:22:41.360790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:22:41.364825 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:22:41.389817 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:22:41.391887 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:22:41.393929 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:22:41.394016 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:22:41.403488 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:22:41.404367 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:22:41.408458 systemd-networkd[1117]: eth0: DHCPv6 lease lost Feb 13 15:22:41.414290 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:22:41.414432 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:22:41.420820 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:22:41.421057 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:22:41.425125 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:22:41.426358 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:22:41.445941 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:22:41.447741 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:22:41.448678 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:22:41.456118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:22:41.456254 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:41.458865 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:22:41.458954 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:41.461092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:41.483986 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:22:41.484244 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:22:41.507393 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 15:22:41.507883 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:22:41.514600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:22:41.514704 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:41.518536 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:22:41.518603 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:41.526002 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:22:41.526095 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:22:41.528265 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:22:41.528351 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:22:41.530547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:22:41.530624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:41.549520 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:22:41.551893 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:22:41.552002 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:41.554784 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:22:41.555619 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:41.567915 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:22:41.568011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:41.570840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:41.570919 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:41.591799 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:22:41.593363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:22:41.601480 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:22:41.613480 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:22:41.632126 systemd[1]: Switching root. Feb 13 15:22:41.672710 systemd-journald[251]: Journal stopped Feb 13 15:22:43.587603 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 15:22:43.587728 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:22:43.587771 kernel: SELinux: policy capability open_perms=1 Feb 13 15:22:43.587801 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:22:43.587831 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:22:43.587859 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:22:43.587888 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:22:43.587918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:22:43.587952 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:22:43.587982 kernel: audit: type=1403 audit(1739460161.986:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:22:43.588021 systemd[1]: Successfully loaded SELinux policy in 51.749ms. Feb 13 15:22:43.588066 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.218ms. 
Feb 13 15:22:43.588100 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:22:43.588132 systemd[1]: Detected virtualization amazon. Feb 13 15:22:43.588161 systemd[1]: Detected architecture arm64. Feb 13 15:22:43.588190 systemd[1]: Detected first boot. Feb 13 15:22:43.591894 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:22:43.591953 zram_generator::config[1406]: No configuration found. Feb 13 15:22:43.591989 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:22:43.592022 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:22:43.592064 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:22:43.592093 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:22:43.592129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:22:43.592164 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:22:43.592196 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:22:43.592275 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:22:43.592313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:22:43.592346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:22:43.592383 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:22:43.592414 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:22:43.592444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:43.592477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:43.592518 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:22:43.592549 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:22:43.592580 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:22:43.592611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:22:43.592642 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:22:43.592673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:43.592702 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:22:43.592730 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:22:43.592764 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:22:43.592794 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:22:43.592827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:43.592858 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:22:43.592887 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:22:43.592917 systemd[1]: Reached target swap.target - Swaps. 
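The `Detected virtualization amazon` / `Detected architecture arm64` lines above come from systemd's own probing during early boot. A minimal way to ask the same questions from user space, assuming `systemd-detect-virt` is installed and on the PATH, could look like this:

```python
# Query the same facts systemd logs at boot ("Detected virtualization amazon",
# "Detected architecture arm64") using systemd-detect-virt and platform.
# Assumes systemd-detect-virt is available on this host.
import platform
import subprocess

result = subprocess.run(
    ["systemd-detect-virt"], capture_output=True, text=True
)
virt = result.stdout.strip() or "none"  # tool prints "none" / exits non-zero on bare metal

print("virtualization:", virt)               # e.g. "amazon" on this instance
print("architecture:", platform.machine())   # e.g. "aarch64" (arm64)
```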
Feb 13 15:22:43.592945 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:22:43.592978 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:22:43.593011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:22:43.593042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:43.593073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:43.593101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:22:43.593129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:22:43.593157 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:22:43.593187 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:22:43.594341 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:22:43.594421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:22:43.594454 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:22:43.594484 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:22:43.594515 systemd[1]: Reached target machines.target - Containers. Feb 13 15:22:43.594547 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:22:43.594575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:43.594604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:22:43.594633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:22:43.594661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:43.594695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:22:43.594725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:43.594753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:22:43.594781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:22:43.594810 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:22:43.594838 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:22:43.594869 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:22:43.594897 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:22:43.594930 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:22:43.594958 kernel: fuse: init (API version 7.39) Feb 13 15:22:43.594986 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:22:43.595014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:22:43.595043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:22:43.595075 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:22:43.595103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Feb 13 15:22:43.595134 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:22:43.595162 systemd[1]: Stopped verity-setup.service. Feb 13 15:22:43.595194 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:22:43.596645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:22:43.596690 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:22:43.596720 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:22:43.596749 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:22:43.598999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:22:43.599037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:43.599076 kernel: loop: module loaded Feb 13 15:22:43.599109 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:22:43.599139 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:22:43.599170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:43.599202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:43.599252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:43.599285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:43.599322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:22:43.599351 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:22:43.599380 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:22:43.599408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:22:43.599441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:43.599472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:22:43.599506 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:22:43.599535 kernel: ACPI: bus type drm_connector registered Feb 13 15:22:43.599565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:22:43.599593 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:22:43.599622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:22:43.599650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:22:43.599682 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:22:43.599756 systemd-journald[1484]: Collecting audit messages is disabled. Feb 13 15:22:43.599812 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:22:43.599842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:22:43.599870 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:22:43.599898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:22:43.599926 systemd-journald[1484]: Journal started Feb 13 15:22:43.599981 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec23fd6f937b0c7c00ea92d08af93f69) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:22:42.956548 systemd[1]: Queued start job for default target multi-user.target. 
Feb 13 15:22:42.979806 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:22:42.980679 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:22:43.606301 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:22:43.610187 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:22:43.666831 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:22:43.666929 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:22:43.672376 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:22:43.688537 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:22:43.697741 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:22:43.699931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:43.704849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:22:43.719490 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:22:43.721809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:22:43.727377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:22:43.734500 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:22:43.738067 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:22:43.742291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:43.745002 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:22:43.787872 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Feb 13 15:22:43.793693 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec23fd6f937b0c7c00ea92d08af93f69 is 93.586ms for 894 entries. Feb 13 15:22:43.793693 systemd-journald[1484]: System Journal (/var/log/journal/ec23fd6f937b0c7c00ea92d08af93f69) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:22:43.915781 systemd-journald[1484]: Received client request to flush runtime journal. Feb 13 15:22:43.915889 kernel: loop0: detected capacity change from 0 to 116784 Feb 13 15:22:43.789321 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Feb 13 15:22:43.792405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:43.799033 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:22:43.834752 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:43.847694 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:22:43.850384 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:22:43.852768 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:22:43.860644 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Feb 13 15:22:43.928351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:22:43.940462 udevadm[1546]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:22:43.946260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:22:43.963855 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:22:43.968142 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:22:43.979322 kernel: loop1: detected capacity change from 0 to 113552 Feb 13 15:22:44.024312 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:22:44.035900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:22:44.047295 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 15:22:44.095125 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Feb 13 15:22:44.095746 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Feb 13 15:22:44.111692 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:44.183268 kernel: loop3: detected capacity change from 0 to 53784 Feb 13 15:22:44.302283 kernel: loop4: detected capacity change from 0 to 116784 Feb 13 15:22:44.330276 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 15:22:44.356266 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 15:22:44.407295 kernel: loop7: detected capacity change from 0 to 53784 Feb 13 15:22:44.416093 (sd-merge)[1565]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:22:44.417749 (sd-merge)[1565]: Merged extensions into '/usr'. Feb 13 15:22:44.427000 systemd[1]: Reloading requested from client PID 1541 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:22:44.427297 systemd[1]: Reloading... Feb 13 15:22:44.655252 zram_generator::config[1594]: No configuration found. Feb 13 15:22:44.670029 ldconfig[1537]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:22:44.930556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:22:45.050360 systemd[1]: Reloading finished in 622 ms. Feb 13 15:22:45.089829 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:22:45.092600 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:22:45.095671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:22:45.112657 systemd[1]: Starting ensure-sysext.service... Feb 13 15:22:45.124842 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:22:45.131616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:45.154345 systemd[1]: Reloading requested from client PID 1644 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:22:45.154375 systemd[1]: Reloading... Feb 13 15:22:45.207116 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
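The `(sd-merge)` lines above record systemd-sysext stitching the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') into /usr, which is why systemd immediately reloads afterwards. As a rough illustration, the snippet below lists the *.raw images such a merge would pick up; the three directories are a subset of the search paths documented for systemd-sysext, and /etc/extensions/kubernetes.raw is the symlink Ignition wrote earlier in this log.

```python
# Sketch: enumerate sysext images in a few of the documented search paths,
# mirroring the "(sd-merge) Using extensions ..." line above.
from pathlib import Path

SEARCH_PATHS = [Path(p) for p in ("/etc/extensions", "/run/extensions", "/var/lib/extensions")]

for base in SEARCH_PATHS:
    if not base.is_dir():
        continue
    for image in sorted(base.glob("*.raw")):
        # e.g. /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
        print(image, "->", image.resolve())
```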
Feb 13 15:22:45.207759 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:22:45.208619 systemd-udevd[1646]: Using default interface naming scheme 'v255'. Feb 13 15:22:45.213768 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:22:45.217527 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Feb 13 15:22:45.217691 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Feb 13 15:22:45.235870 systemd-tmpfiles[1645]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:22:45.235897 systemd-tmpfiles[1645]: Skipping /boot Feb 13 15:22:45.304691 systemd-tmpfiles[1645]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:22:45.304724 systemd-tmpfiles[1645]: Skipping /boot Feb 13 15:22:45.338253 zram_generator::config[1682]: No configuration found. Feb 13 15:22:45.524359 (udev-worker)[1710]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:22:45.604292 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1677) Feb 13 15:22:45.748424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:22:45.905982 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:22:45.907464 systemd[1]: Reloading finished in 752 ms. Feb 13 15:22:45.931357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:22:45.943280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:22:46.009875 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:22:46.027697 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:22:46.030696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:46.041867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:46.081890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:46.096174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:22:46.099612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:46.104360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:22:46.111392 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:22:46.124729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:22:46.133405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:22:46.139463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:46.140328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:46.150403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:46.152910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:46.204420 systemd[1]: Finished ensure-sysext.service. 
Feb 13 15:22:46.214791 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:22:46.216346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:22:46.243877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:22:46.256096 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:22:46.266133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:46.275957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:46.290779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:22:46.302888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:46.306419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:46.310701 augenrules[1873]: No rules Feb 13 15:22:46.311843 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:22:46.314242 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:22:46.319070 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:22:46.328615 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:22:46.340661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:46.346333 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:22:46.349489 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:22:46.349861 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:22:46.352930 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:22:46.355906 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:22:46.359086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:46.359649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:46.362875 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:22:46.364308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:22:46.367001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:46.367811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:46.407733 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:22:46.408319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:22:46.408479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:22:46.408537 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:22:46.434008 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:22:46.442173 lvm[1891]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 13 15:22:46.442873 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:22:46.484048 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:22:46.487735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:46.510874 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:22:46.516145 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:22:46.539582 lvm[1897]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:22:46.587539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:46.601336 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:22:46.661884 systemd-networkd[1852]: lo: Link UP Feb 13 15:22:46.661904 systemd-networkd[1852]: lo: Gained carrier Feb 13 15:22:46.664920 systemd-networkd[1852]: Enumeration completed Feb 13 15:22:46.665438 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:22:46.667804 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:46.667824 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:22:46.670067 systemd-networkd[1852]: eth0: Link UP Feb 13 15:22:46.670530 systemd-networkd[1852]: eth0: Gained carrier Feb 13 15:22:46.670565 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:46.675572 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:22:46.678638 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.20.64/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:22:46.680875 systemd-resolved[1854]: Positive Trust Anchors: Feb 13 15:22:46.680897 systemd-resolved[1854]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:22:46.680959 systemd-resolved[1854]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:22:46.690578 systemd-resolved[1854]: Defaulting to hostname 'linux'. Feb 13 15:22:46.693811 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:22:46.696268 systemd[1]: Reached target network.target - Network. Feb 13 15:22:46.698089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:22:46.700351 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:22:46.702591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:22:46.705161 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
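The DHCPv4 lease reported above (172.31.20.64/20, gateway 172.31.16.1 acquired from 172.31.16.1) is easy to sanity-check: the /20 that contains the address also contains the gateway. A small, purely illustrative check with Python's ipaddress module:

```python
# Check the DHCPv4 lease logged by systemd-networkd above:
# 172.31.20.64/20 and gateway 172.31.16.1 sit in the same /20 network.
import ipaddress

iface = ipaddress.ip_interface("172.31.20.64/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gateway in iface.network)      # True
print(iface.network.num_addresses)   # 4096 addresses in a /20
```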
Feb 13 15:22:46.707878 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:22:46.710370 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:22:46.712949 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:22:46.715383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:22:46.715543 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:22:46.717287 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:22:46.721094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:22:46.726338 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:22:46.737712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:22:46.741107 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:22:46.743537 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:22:46.745727 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:22:46.747611 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:22:46.747672 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:22:46.754511 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:22:46.767579 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:22:46.772740 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:22:46.790380 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:22:46.796700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:22:46.798724 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:22:46.803582 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:22:46.823767 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:22:46.831437 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:22:46.839536 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:22:46.845541 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:22:46.861558 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:22:46.866408 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:22:46.867281 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:22:46.870565 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:22:46.877694 jq[1915]: false Feb 13 15:22:46.877767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:22:46.903059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:22:46.906291 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 15:22:46.950263 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:22:46.952911 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:22:46.964526 dbus-daemon[1914]: [system] SELinux support is enabled Feb 13 15:22:46.970435 dbus-daemon[1914]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:22:46.983913 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:22:47.002069 jq[1927]: true Feb 13 15:22:47.018310 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:22:47.002150 ntpd[1920]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:22:47.034735 extend-filesystems[1916]: Found loop4 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found loop5 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found loop6 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found loop7 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p1 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p2 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p3 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found usr Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p4 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p6 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p7 Feb 13 15:22:47.034735 extend-filesystems[1916]: Found nvme0n1p9 Feb 13 15:22:47.034735 extend-filesystems[1916]: Checking size of /dev/nvme0n1p9 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: ---------------------------------------------------- Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: corporation. 
Support and training for ntp-4 are Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: available at https://www.nwtime.org/support Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: ---------------------------------------------------- Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: proto: precision = 0.108 usec (-23) Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: basedate set to 2025-02-01 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: gps base set to 2025-02-02 (week 2352) Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listen normally on 3 eth0 172.31.20.64:123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listen normally on 4 lo [::1]:123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: bind(21) AF_INET6 fe80::430:c8ff:fed4:2685%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: unable to create socket on eth0 (5) for fe80::430:c8ff:fed4:2685%2#123 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: failed to init interface for address fe80::430:c8ff:fed4:2685%2 Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: Listening on routing socket on fd #21 for interface updates Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:22:47.107652 ntpd[1920]: 13 Feb 15:22:47 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:22:47.018656 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:22:47.002197 ntpd[1920]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:22:47.032795 (ntainerd)[1944]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:22:47.011594 ntpd[1920]: ---------------------------------------------------- Feb 13 15:22:47.049989 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:22:47.122380 jq[1947]: true Feb 13 15:22:47.011631 ntpd[1920]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:22:47.050046 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:22:47.011650 ntpd[1920]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:22:47.066590 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:22:47.011668 ntpd[1920]: corporation. Support and training for ntp-4 are Feb 13 15:22:47.066634 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:22:47.011685 ntpd[1920]: available at https://www.nwtime.org/support Feb 13 15:22:47.122144 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Feb 13 15:22:47.011703 ntpd[1920]: ---------------------------------------------------- Feb 13 15:22:47.021315 ntpd[1920]: proto: precision = 0.108 usec (-23) Feb 13 15:22:47.028087 ntpd[1920]: basedate set to 2025-02-01 Feb 13 15:22:47.028125 ntpd[1920]: gps base set to 2025-02-02 (week 2352) Feb 13 15:22:47.044178 ntpd[1920]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:22:47.046397 ntpd[1920]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:22:47.049182 ntpd[1920]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:22:47.050345 ntpd[1920]: Listen normally on 3 eth0 172.31.20.64:123 Feb 13 15:22:47.050426 ntpd[1920]: Listen normally on 4 lo [::1]:123 Feb 13 15:22:47.050515 ntpd[1920]: bind(21) AF_INET6 fe80::430:c8ff:fed4:2685%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:22:47.050552 ntpd[1920]: unable to create socket on eth0 (5) for fe80::430:c8ff:fed4:2685%2#123 Feb 13 15:22:47.050580 ntpd[1920]: failed to init interface for address fe80::430:c8ff:fed4:2685%2 Feb 13 15:22:47.050635 ntpd[1920]: Listening on routing socket on fd #21 for interface updates Feb 13 15:22:47.051537 dbus-daemon[1914]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:22:47.095293 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:22:47.095342 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:22:47.132243 update_engine[1926]: I20250213 15:22:47.128647 1926 main.cc:92] Flatcar Update Engine starting Feb 13 15:22:47.146673 extend-filesystems[1916]: Resized partition /dev/nvme0n1p9 Feb 13 15:22:47.156883 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:22:47.161505 extend-filesystems[1961]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:22:47.168023 update_engine[1926]: I20250213 15:22:47.167586 1926 update_check_scheduler.cc:74] Next update check in 11m21s Feb 13 15:22:47.174254 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:22:47.188688 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 15:22:47.243479 coreos-metadata[1913]: Feb 13 15:22:47.242 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.250 INFO Fetch successful Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.253 INFO Fetch successful Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.255 INFO Fetch successful Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.256 INFO Fetch successful Feb 13 15:22:47.257402 coreos-metadata[1913]: Feb 13 15:22:47.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.257 INFO Fetch failed with 404: resource not found Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.261 INFO Fetch successful Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.262 INFO Fetch successful Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.264 INFO Fetch successful Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.265 INFO Fetch successful Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.265 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:22:47.264173 coreos-metadata[1913]: Feb 13 15:22:47.266 INFO Fetch successful Feb 13 15:22:47.282190 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:22:47.334878 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:22:47.370273 extend-filesystems[1961]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:22:47.370273 extend-filesystems[1961]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:22:47.370273 extend-filesystems[1961]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:22:47.382355 extend-filesystems[1916]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:22:47.373300 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:22:47.375666 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:22:47.430265 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:22:47.433974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
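The extend-filesystems/resize2fs exchange above grows the ext4 filesystem on /dev/nvme0n1p9 online from 553472 to 1489915 blocks of 4 KiB. The arithmetic behind those block counts, shown as a throwaway calculation (block size taken from the "(4k) blocks" wording in the log):

```python
# Arithmetic behind the online resize logged above:
# resize2fs grew /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB each.
BLOCK = 4096  # "(4k) blocks" per the extend-filesystems message

before = 553472 * BLOCK
after = 1489915 * BLOCK

print(f"before: {before / 2**30:.2f} GiB")           # ~2.11 GiB
print(f"after:  {after / 2**30:.2f} GiB")            # ~5.68 GiB
print(f"growth: {(after - before) / 2**30:.2f} GiB") # ~3.57 GiB
```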
Feb 13 15:22:47.442407 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1710) Feb 13 15:22:47.448191 systemd-logind[1925]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:22:47.449367 systemd-logind[1925]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:22:47.450587 systemd-logind[1925]: New seat seat0. Feb 13 15:22:47.456743 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:22:47.461271 bash[2004]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:22:47.470318 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:22:47.516209 locksmithd[1962]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:22:47.532073 systemd[1]: Starting sshkeys.service... Feb 13 15:22:47.578753 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:22:47.626065 dbus-daemon[1914]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:22:47.629696 dbus-daemon[1914]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1955 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:22:47.631386 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:22:47.636337 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:22:47.648775 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:22:47.768105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:22:47.770783 containerd[1944]: time="2025-02-13T15:22:47.766158297Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:22:47.773811 polkitd[2022]: Started polkitd version 121 Feb 13 15:22:47.797101 polkitd[2022]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:22:47.801618 polkitd[2022]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:22:47.806393 polkitd[2022]: Finished loading, compiling and executing 2 rules Feb 13 15:22:47.807903 dbus-daemon[1914]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:22:47.809762 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:22:47.812468 polkitd[2022]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:22:47.875991 systemd-hostnamed[1955]: Hostname set to (transient) Feb 13 15:22:47.878310 systemd-resolved[1854]: System hostname changed to 'ip-172-31-20-64'. 
Feb 13 15:22:47.887797 coreos-metadata[2019]: Feb 13 15:22:47.886 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:22:47.889796 coreos-metadata[2019]: Feb 13 15:22:47.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:22:47.890876 coreos-metadata[2019]: Feb 13 15:22:47.890 INFO Fetch successful Feb 13 15:22:47.890876 coreos-metadata[2019]: Feb 13 15:22:47.890 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:22:47.894921 coreos-metadata[2019]: Feb 13 15:22:47.894 INFO Fetch successful Feb 13 15:22:47.899956 unknown[2019]: wrote ssh authorized keys file for user: core Feb 13 15:22:47.966570 update-ssh-keys[2090]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:22:47.969359 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:22:47.975128 containerd[1944]: time="2025-02-13T15:22:47.974819614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.978308 systemd[1]: Finished sshkeys.service. Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.980838574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.980898382Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.980934070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.981761902Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.981812338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.981939610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.981971278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.982304134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.982335562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.982365694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:47.983683 containerd[1944]: time="2025-02-13T15:22:47.982389154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.984194 containerd[1944]: time="2025-02-13T15:22:47.982552870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.984194 containerd[1944]: time="2025-02-13T15:22:47.982953514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:47.984194 containerd[1944]: time="2025-02-13T15:22:47.983133226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:47.984194 containerd[1944]: time="2025-02-13T15:22:47.983160802Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:22:47.985867 containerd[1944]: time="2025-02-13T15:22:47.985816546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:22:47.986122 containerd[1944]: time="2025-02-13T15:22:47.986079166Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:22:47.995339 containerd[1944]: time="2025-02-13T15:22:47.995139899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:22:47.995764 containerd[1944]: time="2025-02-13T15:22:47.995713595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:22:47.995950 containerd[1944]: time="2025-02-13T15:22:47.995920751Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:22:47.996109 containerd[1944]: time="2025-02-13T15:22:47.996075191Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:22:47.996283 containerd[1944]: time="2025-02-13T15:22:47.996194423Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998066963Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998562875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998872703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998916755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998957687Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.998992979Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.999029699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.999066947Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.999105059Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.999147611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999265 containerd[1944]: time="2025-02-13T15:22:47.999191495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999861 containerd[1944]: time="2025-02-13T15:22:47.999829487Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:22:47.999972 containerd[1944]: time="2025-02-13T15:22:47.999942575Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:22:48.000097 containerd[1944]: time="2025-02-13T15:22:48.000066751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.000246 containerd[1944]: time="2025-02-13T15:22:48.000191455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.000379 containerd[1944]: time="2025-02-13T15:22:48.000351319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.000490 containerd[1944]: time="2025-02-13T15:22:48.000462091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.000605 containerd[1944]: time="2025-02-13T15:22:48.000577411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.000736 containerd[1944]: time="2025-02-13T15:22:48.000708451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001391635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001449295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001488043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001524823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001554379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001584043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001612963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001646503Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001693963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001724467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001754851Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001898479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001938931Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:22:48.003296 containerd[1944]: time="2025-02-13T15:22:48.001963423Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:22:48.004028 containerd[1944]: time="2025-02-13T15:22:48.001992751Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:22:48.004028 containerd[1944]: time="2025-02-13T15:22:48.002371531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:22:48.004028 containerd[1944]: time="2025-02-13T15:22:48.002428387Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:22:48.004028 containerd[1944]: time="2025-02-13T15:22:48.002457823Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:22:48.004028 containerd[1944]: time="2025-02-13T15:22:48.002536471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:22:48.008583 containerd[1944]: time="2025-02-13T15:22:48.007546267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:22:48.008583 containerd[1944]: time="2025-02-13T15:22:48.008569303Z" level=info msg="Connect containerd service" Feb 13 15:22:48.008899 containerd[1944]: time="2025-02-13T15:22:48.008664163Z" level=info msg="using legacy CRI server" Feb 13 15:22:48.008899 containerd[1944]: time="2025-02-13T15:22:48.008683723Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:22:48.009049 containerd[1944]: time="2025-02-13T15:22:48.008975755Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:22:48.012317 ntpd[1920]: bind(24) AF_INET6 fe80::430:c8ff:fed4:2685%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:22:48.012987 ntpd[1920]: 13 Feb 15:22:48 ntpd[1920]: bind(24) AF_INET6 fe80::430:c8ff:fed4:2685%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:22:48.012987 ntpd[1920]: 13 Feb 
15:22:48 ntpd[1920]: unable to create socket on eth0 (6) for fe80::430:c8ff:fed4:2685%2#123 Feb 13 15:22:48.012987 ntpd[1920]: 13 Feb 15:22:48 ntpd[1920]: failed to init interface for address fe80::430:c8ff:fed4:2685%2 Feb 13 15:22:48.012382 ntpd[1920]: unable to create socket on eth0 (6) for fe80::430:c8ff:fed4:2685%2#123 Feb 13 15:22:48.012409 ntpd[1920]: failed to init interface for address fe80::430:c8ff:fed4:2685%2 Feb 13 15:22:48.014704 containerd[1944]: time="2025-02-13T15:22:48.014629039Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015362923Z" level=info msg="Start subscribing containerd event" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015493207Z" level=info msg="Start recovering state" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015672931Z" level=info msg="Start event monitor" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015701107Z" level=info msg="Start snapshots syncer" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015727279Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:22:48.017160 containerd[1944]: time="2025-02-13T15:22:48.015747199Z" level=info msg="Start streaming server" Feb 13 15:22:48.017699 containerd[1944]: time="2025-02-13T15:22:48.017610187Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:22:48.018197 containerd[1944]: time="2025-02-13T15:22:48.017792515Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:22:48.018197 containerd[1944]: time="2025-02-13T15:22:48.017955547Z" level=info msg="containerd successfully booted in 0.259077s" Feb 13 15:22:48.018384 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:22:48.153414 systemd-networkd[1852]: eth0: Gained IPv6LL Feb 13 15:22:48.158289 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:22:48.163618 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:22:48.177625 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:22:48.187384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:22:48.193811 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:22:48.291743 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:22:48.303994 amazon-ssm-agent[2117]: Initializing new seelog logger Feb 13 15:22:48.304737 amazon-ssm-agent[2117]: New Seelog Logger Creation Complete Feb 13 15:22:48.304927 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.305006 amazon-ssm-agent[2117]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.305707 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 processing appconfig overrides Feb 13 15:22:48.306412 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.306501 amazon-ssm-agent[2117]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:22:48.308006 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 processing appconfig overrides Feb 13 15:22:48.308006 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.308006 amazon-ssm-agent[2117]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.308006 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 processing appconfig overrides Feb 13 15:22:48.308006 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO Proxy environment variables: Feb 13 15:22:48.312248 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.312248 amazon-ssm-agent[2117]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:22:48.314259 amazon-ssm-agent[2117]: 2025/02/13 15:22:48 processing appconfig overrides Feb 13 15:22:48.408129 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO http_proxy: Feb 13 15:22:48.446726 sshd_keygen[1945]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:22:48.494297 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:22:48.507424 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:22:48.511249 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO no_proxy: Feb 13 15:22:48.521083 systemd[1]: Started sshd@0-172.31.20.64:22-147.75.109.163:43716.service - OpenSSH per-connection server daemon (147.75.109.163:43716). Feb 13 15:22:48.544936 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:22:48.545643 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:22:48.560810 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:22:48.610211 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO https_proxy: Feb 13 15:22:48.615300 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:22:48.632388 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:22:48.639612 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:22:48.642883 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:22:48.710265 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:22:48.808494 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:22:48.819796 sshd[2142]: Accepted publickey for core from 147.75.109.163 port 43716 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:48.825968 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:48.849010 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:22:48.858693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:22:48.868700 systemd-logind[1925]: New session 1 of user core. Feb 13 15:22:48.918541 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO Agent will take identity from EC2 Feb 13 15:22:48.913368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:22:48.931841 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:22:48.955896 (systemd)[2155]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:22:49.007249 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:22:49.106068 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [Registrar] Starting registrar module Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:49 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:49 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:49 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:22:49.173294 amazon-ssm-agent[2117]: 2025-02-13 15:22:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:22:49.205662 amazon-ssm-agent[2117]: 2025-02-13 15:22:49 INFO [CredentialRefresher] Next credential rotation will be in 30.566615480366668 minutes Feb 13 15:22:49.214775 systemd[2155]: Queued start job for default target default.target. Feb 13 15:22:49.223729 systemd[2155]: Created slice app.slice - User Application Slice. Feb 13 15:22:49.223797 systemd[2155]: Reached target paths.target - Paths. Feb 13 15:22:49.223830 systemd[2155]: Reached target timers.target - Timers. Feb 13 15:22:49.228476 systemd[2155]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:22:49.249835 systemd[2155]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:22:49.249962 systemd[2155]: Reached target sockets.target - Sockets. Feb 13 15:22:49.249995 systemd[2155]: Reached target basic.target - Basic System. Feb 13 15:22:49.250090 systemd[2155]: Reached target default.target - Main User Target. Feb 13 15:22:49.250173 systemd[2155]: Startup finished in 269ms. Feb 13 15:22:49.250708 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:22:49.265139 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:22:49.427013 systemd[1]: Started sshd@1-172.31.20.64:22-147.75.109.163:55016.service - OpenSSH per-connection server daemon (147.75.109.163:55016). Feb 13 15:22:49.630995 sshd[2166]: Accepted publickey for core from 147.75.109.163 port 55016 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:49.634799 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:49.644857 systemd-logind[1925]: New session 2 of user core. Feb 13 15:22:49.654530 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:22:49.784583 sshd[2168]: Connection closed by 147.75.109.163 port 55016 Feb 13 15:22:49.785630 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:49.791110 systemd[1]: sshd@1-172.31.20.64:22-147.75.109.163:55016.service: Deactivated successfully. Feb 13 15:22:49.794835 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:22:49.799699 systemd-logind[1925]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:22:49.802073 systemd-logind[1925]: Removed session 2. Feb 13 15:22:49.827670 systemd[1]: Started sshd@2-172.31.20.64:22-147.75.109.163:55032.service - OpenSSH per-connection server daemon (147.75.109.163:55032). Feb 13 15:22:50.025209 sshd[2173]: Accepted publickey for core from 147.75.109.163 port 55032 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:50.026688 sshd-session[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:50.035246 systemd-logind[1925]: New session 3 of user core. Feb 13 15:22:50.040570 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:22:50.175021 sshd[2175]: Connection closed by 147.75.109.163 port 55032 Feb 13 15:22:50.174700 sshd-session[2173]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:50.181383 systemd[1]: sshd@2-172.31.20.64:22-147.75.109.163:55032.service: Deactivated successfully. Feb 13 15:22:50.182325 systemd-logind[1925]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:22:50.187865 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:22:50.194670 systemd-logind[1925]: Removed session 3. Feb 13 15:22:50.217867 amazon-ssm-agent[2117]: 2025-02-13 15:22:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:22:50.299762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:22:50.304104 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:22:50.307591 systemd[1]: Startup finished in 1.077s (kernel) + 7.200s (initrd) + 8.371s (userspace) = 16.649s. 
Feb 13 15:22:50.315173 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:22:50.318121 amazon-ssm-agent[2117]: 2025-02-13 15:22:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2180) started Feb 13 15:22:50.340931 agetty[2151]: failed to open credentials directory Feb 13 15:22:50.348421 agetty[2149]: failed to open credentials directory Feb 13 15:22:50.418568 amazon-ssm-agent[2117]: 2025-02-13 15:22:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:22:51.012326 ntpd[1920]: Listen normally on 7 eth0 [fe80::430:c8ff:fed4:2685%2]:123 Feb 13 15:22:51.012780 ntpd[1920]: 13 Feb 15:22:51 ntpd[1920]: Listen normally on 7 eth0 [fe80::430:c8ff:fed4:2685%2]:123 Feb 13 15:22:51.324908 kubelet[2189]: E0213 15:22:51.324825 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:22:51.329931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:22:51.330708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:22:51.331488 systemd[1]: kubelet.service: Consumed 1.249s CPU time. Feb 13 15:22:54.268353 systemd-resolved[1854]: Clock change detected. Flushing caches. Feb 13 15:23:00.461818 systemd[1]: Started sshd@3-172.31.20.64:22-147.75.109.163:43592.service - OpenSSH per-connection server daemon (147.75.109.163:43592). Feb 13 15:23:00.650512 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 43592 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:00.652838 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:00.660160 systemd-logind[1925]: New session 4 of user core. Feb 13 15:23:00.672138 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:23:00.796128 sshd[2210]: Connection closed by 147.75.109.163 port 43592 Feb 13 15:23:00.795935 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:00.802814 systemd[1]: sshd@3-172.31.20.64:22-147.75.109.163:43592.service: Deactivated successfully. Feb 13 15:23:00.806723 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:23:00.808312 systemd-logind[1925]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:23:00.810350 systemd-logind[1925]: Removed session 4. Feb 13 15:23:00.837355 systemd[1]: Started sshd@4-172.31.20.64:22-147.75.109.163:43596.service - OpenSSH per-connection server daemon (147.75.109.163:43596). Feb 13 15:23:01.018973 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 43596 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:01.022233 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:01.031384 systemd-logind[1925]: New session 5 of user core. Feb 13 15:23:01.037130 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 15:23:01.157112 sshd[2217]: Connection closed by 147.75.109.163 port 43596 Feb 13 15:23:01.157993 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:01.163608 systemd[1]: sshd@4-172.31.20.64:22-147.75.109.163:43596.service: Deactivated successfully. Feb 13 15:23:01.167506 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:23:01.169061 systemd-logind[1925]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:23:01.171238 systemd-logind[1925]: Removed session 5. Feb 13 15:23:01.198400 systemd[1]: Started sshd@5-172.31.20.64:22-147.75.109.163:43600.service - OpenSSH per-connection server daemon (147.75.109.163:43600). Feb 13 15:23:01.376984 sshd[2222]: Accepted publickey for core from 147.75.109.163 port 43600 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:01.380732 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:01.389196 systemd-logind[1925]: New session 6 of user core. Feb 13 15:23:01.402502 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:23:01.527059 sshd[2224]: Connection closed by 147.75.109.163 port 43600 Feb 13 15:23:01.526833 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:01.533464 systemd[1]: sshd@5-172.31.20.64:22-147.75.109.163:43600.service: Deactivated successfully. Feb 13 15:23:01.537707 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:23:01.539447 systemd-logind[1925]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:23:01.541348 systemd-logind[1925]: Removed session 6. Feb 13 15:23:01.566395 systemd[1]: Started sshd@6-172.31.20.64:22-147.75.109.163:43604.service - OpenSSH per-connection server daemon (147.75.109.163:43604). Feb 13 15:23:01.709769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:23:01.723199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:01.741435 sshd[2229]: Accepted publickey for core from 147.75.109.163 port 43604 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:01.744006 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:01.752038 systemd-logind[1925]: New session 7 of user core. Feb 13 15:23:01.761158 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:23:01.885096 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:23:01.886420 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:23:01.906367 sudo[2235]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:01.929953 sshd[2234]: Connection closed by 147.75.109.163 port 43604 Feb 13 15:23:01.931218 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:01.936702 systemd-logind[1925]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:23:01.937848 systemd[1]: sshd@6-172.31.20.64:22-147.75.109.163:43604.service: Deactivated successfully. Feb 13 15:23:01.942059 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:23:01.946777 systemd-logind[1925]: Removed session 7. Feb 13 15:23:01.963370 systemd[1]: Started sshd@7-172.31.20.64:22-147.75.109.163:43612.service - OpenSSH per-connection server daemon (147.75.109.163:43612). Feb 13 15:23:02.071299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:23:02.072806 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:23:02.154945 kubelet[2246]: E0213 15:23:02.152618 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:23:02.161188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:23:02.161582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:23:02.163920 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 43612 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:02.166158 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:02.174072 systemd-logind[1925]: New session 8 of user core. Feb 13 15:23:02.183127 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:23:02.284723 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:23:02.285932 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:23:02.292148 sudo[2257]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:02.302080 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:23:02.302676 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:23:02.334206 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:23:02.380486 augenrules[2279]: No rules Feb 13 15:23:02.382694 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:23:02.384001 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:23:02.386077 sudo[2256]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:02.408698 sshd[2255]: Connection closed by 147.75.109.163 port 43612 Feb 13 15:23:02.408506 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:02.414323 systemd[1]: sshd@7-172.31.20.64:22-147.75.109.163:43612.service: Deactivated successfully. Feb 13 15:23:02.417081 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:23:02.421070 systemd-logind[1925]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:23:02.423173 systemd-logind[1925]: Removed session 8. Feb 13 15:23:02.448391 systemd[1]: Started sshd@8-172.31.20.64:22-147.75.109.163:43620.service - OpenSSH per-connection server daemon (147.75.109.163:43620). Feb 13 15:23:02.633176 sshd[2287]: Accepted publickey for core from 147.75.109.163 port 43620 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:02.635519 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:02.642245 systemd-logind[1925]: New session 9 of user core. Feb 13 15:23:02.651130 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 15:23:02.752792 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:23:02.754063 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:23:03.670578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:03.684482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:03.724222 systemd[1]: Reloading requested from client PID 2322 ('systemctl') (unit session-9.scope)... Feb 13 15:23:03.724250 systemd[1]: Reloading... Feb 13 15:23:03.992913 zram_generator::config[2362]: No configuration found. Feb 13 15:23:04.224079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:23:04.392234 systemd[1]: Reloading finished in 665 ms. Feb 13 15:23:04.494087 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:23:04.494475 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:23:04.496043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:04.506466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:04.806310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:04.807306 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:23:04.884817 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:23:04.885910 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:23:04.885910 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:23:04.885910 kubelet[2426]: I0213 15:23:04.885450 2426 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:23:07.327930 kubelet[2426]: I0213 15:23:07.327833 2426 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:23:07.327930 kubelet[2426]: I0213 15:23:07.327905 2426 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:23:07.328647 kubelet[2426]: I0213 15:23:07.328445 2426 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:23:07.373541 kubelet[2426]: I0213 15:23:07.373276 2426 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:23:07.387428 kubelet[2426]: E0213 15:23:07.387376 2426 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:23:07.387752 kubelet[2426]: I0213 15:23:07.387616 2426 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Feb 13 15:23:07.395054 kubelet[2426]: I0213 15:23:07.394959 2426 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:23:07.396329 kubelet[2426]: I0213 15:23:07.396282 2426 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:23:07.396641 kubelet[2426]: I0213 15:23:07.396584 2426 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:23:07.397782 kubelet[2426]: I0213 15:23:07.396636 2426 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.20.64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:23:07.397782 kubelet[2426]: I0213 15:23:07.397345 2426 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:23:07.397782 kubelet[2426]: I0213 15:23:07.397378 2426 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:23:07.397782 kubelet[2426]: I0213 15:23:07.397606 2426 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:07.401805 kubelet[2426]: I0213 15:23:07.401758 2426 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:23:07.402545 kubelet[2426]: I0213 15:23:07.402018 2426 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:23:07.402545 kubelet[2426]: I0213 15:23:07.402092 2426 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:23:07.402545 kubelet[2426]: I0213 15:23:07.402115 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:23:07.409965 kubelet[2426]: E0213 15:23:07.409913 2426 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:07.410126 kubelet[2426]: E0213 15:23:07.410004 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:07.411339 kubelet[2426]: I0213 15:23:07.410908 2426 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:23:07.415138 kubelet[2426]: I0213 15:23:07.414554 2426 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:23:07.415138 kubelet[2426]: W0213 15:23:07.414714 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:23:07.416359 kubelet[2426]: I0213 15:23:07.416316 2426 server.go:1269] "Started kubelet" Feb 13 15:23:07.419340 kubelet[2426]: I0213 15:23:07.419287 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:23:07.432919 kubelet[2426]: I0213 15:23:07.431376 2426 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:23:07.432919 kubelet[2426]: I0213 15:23:07.432503 2426 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:23:07.433110 kubelet[2426]: E0213 15:23:07.432966 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:07.433618 kubelet[2426]: I0213 15:23:07.433592 2426 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:23:07.437243 kubelet[2426]: I0213 15:23:07.434396 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:23:07.438377 kubelet[2426]: I0213 15:23:07.438344 2426 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:23:07.438581 kubelet[2426]: I0213 15:23:07.437395 2426 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:23:07.438687 kubelet[2426]: I0213 15:23:07.434905 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:23:07.438971 kubelet[2426]: I0213 15:23:07.437494 2426 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:23:07.440517 kubelet[2426]: E0213 15:23:07.439649 2426 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:23:07.441405 kubelet[2426]: I0213 15:23:07.441373 2426 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:23:07.441772 kubelet[2426]: I0213 15:23:07.441734 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:23:07.448108 kubelet[2426]: E0213 15:23:07.444466 2426 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.64.1823cdd30be195d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.64,UID:172.31.20.64,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.64,},FirstTimestamp:2025-02-13 15:23:07.416270289 +0000 UTC m=+2.600181050,LastTimestamp:2025-02-13 15:23:07.416270289 +0000 UTC m=+2.600181050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.64,}" Feb 13 15:23:07.448529 kubelet[2426]: W0213 15:23:07.448441 2426 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.20.64" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:23:07.448529 kubelet[2426]: E0213 15:23:07.448502 2426 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.20.64\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:23:07.448698 kubelet[2426]: W0213 15:23:07.448655 2426 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:23:07.448698 kubelet[2426]: E0213 15:23:07.448683 2426 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:23:07.453095 kubelet[2426]: I0213 15:23:07.453040 2426 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:23:07.471323 kubelet[2426]: E0213 15:23:07.470389 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.64\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 15:23:07.471500 kubelet[2426]: E0213 15:23:07.471291 2426 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.64.1823cdd30d4605f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.64,UID:172.31.20.64,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.64,},FirstTimestamp:2025-02-13 15:23:07.439629813 +0000 UTC m=+2.623540574,LastTimestamp:2025-02-13 15:23:07.439629813 +0000 UTC m=+2.623540574,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.64,}" Feb 13 15:23:07.471761 kubelet[2426]: W0213 15:23:07.471562 2426 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:23:07.471761 kubelet[2426]: E0213 15:23:07.471601 2426 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 15:23:07.496361 kubelet[2426]: I0213 15:23:07.496197 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:23:07.496361 kubelet[2426]: I0213 15:23:07.496231 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:23:07.496361 kubelet[2426]: I0213 15:23:07.496271 2426 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:07.501712 kubelet[2426]: I0213 15:23:07.501655 2426 policy_none.go:49] "None policy: Start" Feb 13 15:23:07.503833 kubelet[2426]: I0213 15:23:07.503217 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:23:07.503833 kubelet[2426]: I0213 15:23:07.503260 2426 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:23:07.514154 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:23:07.531581 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:23:07.535366 kubelet[2426]: E0213 15:23:07.535330 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:07.542572 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:23:07.545520 kubelet[2426]: I0213 15:23:07.545462 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:23:07.549168 kubelet[2426]: I0213 15:23:07.549125 2426 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:23:07.549358 kubelet[2426]: I0213 15:23:07.549338 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:23:07.549683 kubelet[2426]: I0213 15:23:07.549662 2426 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:23:07.550079 kubelet[2426]: E0213 15:23:07.550039 2426 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:23:07.555968 kubelet[2426]: I0213 15:23:07.555159 2426 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:23:07.555968 kubelet[2426]: I0213 15:23:07.555445 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:23:07.555968 kubelet[2426]: I0213 15:23:07.555465 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:23:07.558847 kubelet[2426]: I0213 15:23:07.558793 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:23:07.563124 kubelet[2426]: E0213 15:23:07.563070 2426 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.64\" not found" Feb 13 15:23:07.657745 kubelet[2426]: I0213 15:23:07.657501 2426 kubelet_node_status.go:72] "Attempting to register node" node="172.31.20.64" Feb 13 15:23:07.678990 kubelet[2426]: I0213 15:23:07.678280 2426 kubelet_node_status.go:75] "Successfully registered node" node="172.31.20.64" Feb 13 15:23:07.678990 kubelet[2426]: E0213 15:23:07.678340 2426 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.20.64\": node \"172.31.20.64\" not found" Feb 13 15:23:07.717389 kubelet[2426]: E0213 15:23:07.717317 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:07.817779 kubelet[2426]: E0213 15:23:07.817732 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:07.898233 sudo[2290]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:07.918905 kubelet[2426]: E0213 15:23:07.918760 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:07.920314 sshd[2289]: Connection closed by 147.75.109.163 port 43620 Feb 13 15:23:07.921137 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:07.926999 systemd[1]: sshd@8-172.31.20.64:22-147.75.109.163:43620.service: Deactivated successfully. Feb 13 15:23:07.930974 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:23:07.932686 systemd-logind[1925]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:23:07.934432 systemd-logind[1925]: Removed session 9. 
Feb 13 15:23:08.019572 kubelet[2426]: E0213 15:23:08.019495 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.120270 kubelet[2426]: E0213 15:23:08.120224 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.221030 kubelet[2426]: E0213 15:23:08.220923 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.321747 kubelet[2426]: E0213 15:23:08.321682 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.332141 kubelet[2426]: I0213 15:23:08.331908 2426 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:23:08.332141 kubelet[2426]: W0213 15:23:08.332104 2426 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:23:08.411124 kubelet[2426]: E0213 15:23:08.411044 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:08.422494 kubelet[2426]: E0213 15:23:08.422440 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.524017 kubelet[2426]: E0213 15:23:08.523398 2426 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.20.64\" not found" Feb 13 15:23:08.625124 kubelet[2426]: I0213 15:23:08.625068 2426 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:23:08.626122 containerd[1944]: time="2025-02-13T15:23:08.626034995Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:23:08.626681 kubelet[2426]: I0213 15:23:08.626470 2426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:23:09.410457 kubelet[2426]: I0213 15:23:09.410403 2426 apiserver.go:52] "Watching apiserver" Feb 13 15:23:09.411975 kubelet[2426]: E0213 15:23:09.411942 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:09.415379 kubelet[2426]: E0213 15:23:09.414353 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:09.427311 systemd[1]: Created slice kubepods-besteffort-podcb51a8f1_f230_4817_aac4_35ea2be675d3.slice - libcontainer container kubepods-besteffort-podcb51a8f1_f230_4817_aac4_35ea2be675d3.slice. Feb 13 15:23:09.439521 kubelet[2426]: I0213 15:23:09.439463 2426 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:23:09.446373 systemd[1]: Created slice kubepods-besteffort-podccd769fb_36c6_4f0b_8d66_2c085e4d790f.slice - libcontainer container kubepods-besteffort-podccd769fb_36c6_4f0b_8d66_2c085e4d790f.slice. 
Feb 13 15:23:09.450408 kubelet[2426]: I0213 15:23:09.450315 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-flexvol-driver-host\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.450408 kubelet[2426]: I0213 15:23:09.450377 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae81791c-2e14-4b2d-805b-1a1db95301cc-kubelet-dir\") pod \"csi-node-driver-gw5tb\" (UID: \"ae81791c-2e14-4b2d-805b-1a1db95301cc\") " pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:09.450602 kubelet[2426]: I0213 15:23:09.450419 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae81791c-2e14-4b2d-805b-1a1db95301cc-socket-dir\") pod \"csi-node-driver-gw5tb\" (UID: \"ae81791c-2e14-4b2d-805b-1a1db95301cc\") " pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:09.450602 kubelet[2426]: I0213 15:23:09.450455 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae81791c-2e14-4b2d-805b-1a1db95301cc-registration-dir\") pod \"csi-node-driver-gw5tb\" (UID: \"ae81791c-2e14-4b2d-805b-1a1db95301cc\") " pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:09.450602 kubelet[2426]: I0213 15:23:09.450495 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccd769fb-36c6-4f0b-8d66-2c085e4d790f-kube-proxy\") pod \"kube-proxy-r62f4\" (UID: \"ccd769fb-36c6-4f0b-8d66-2c085e4d790f\") " pod="kube-system/kube-proxy-r62f4" Feb 13 15:23:09.450602 kubelet[2426]: I0213 15:23:09.450530 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccd769fb-36c6-4f0b-8d66-2c085e4d790f-xtables-lock\") pod \"kube-proxy-r62f4\" (UID: \"ccd769fb-36c6-4f0b-8d66-2c085e4d790f\") " pod="kube-system/kube-proxy-r62f4" Feb 13 15:23:09.450602 kubelet[2426]: I0213 15:23:09.450580 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-var-lib-calico\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.450843 kubelet[2426]: I0213 15:23:09.450615 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-cni-net-dir\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.450843 kubelet[2426]: I0213 15:23:09.450661 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccd769fb-36c6-4f0b-8d66-2c085e4d790f-lib-modules\") pod \"kube-proxy-r62f4\" (UID: \"ccd769fb-36c6-4f0b-8d66-2c085e4d790f\") " pod="kube-system/kube-proxy-r62f4" Feb 13 15:23:09.450843 kubelet[2426]: I0213 15:23:09.450696 2426 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllnc\" (UniqueName: \"kubernetes.io/projected/ccd769fb-36c6-4f0b-8d66-2c085e4d790f-kube-api-access-mllnc\") pod \"kube-proxy-r62f4\" (UID: \"ccd769fb-36c6-4f0b-8d66-2c085e4d790f\") " pod="kube-system/kube-proxy-r62f4" Feb 13 15:23:09.450843 kubelet[2426]: I0213 15:23:09.450729 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-lib-modules\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.450843 kubelet[2426]: I0213 15:23:09.450762 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-var-run-calico\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451115 kubelet[2426]: I0213 15:23:09.450795 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-cni-log-dir\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451115 kubelet[2426]: I0213 15:23:09.450845 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae81791c-2e14-4b2d-805b-1a1db95301cc-varrun\") pod \"csi-node-driver-gw5tb\" (UID: \"ae81791c-2e14-4b2d-805b-1a1db95301cc\") " pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:09.451115 kubelet[2426]: I0213 15:23:09.450928 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6bg\" (UniqueName: \"kubernetes.io/projected/ae81791c-2e14-4b2d-805b-1a1db95301cc-kube-api-access-kb6bg\") pod \"csi-node-driver-gw5tb\" (UID: \"ae81791c-2e14-4b2d-805b-1a1db95301cc\") " pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:09.451115 kubelet[2426]: I0213 15:23:09.450967 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-xtables-lock\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451115 kubelet[2426]: I0213 15:23:09.451026 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cb51a8f1-f230-4817-aac4-35ea2be675d3-node-certs\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451350 kubelet[2426]: I0213 15:23:09.451069 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-cni-bin-dir\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451350 kubelet[2426]: I0213 15:23:09.451105 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-p78kz\" (UniqueName: \"kubernetes.io/projected/cb51a8f1-f230-4817-aac4-35ea2be675d3-kube-api-access-p78kz\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451350 kubelet[2426]: I0213 15:23:09.451148 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cb51a8f1-f230-4817-aac4-35ea2be675d3-policysync\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.451350 kubelet[2426]: I0213 15:23:09.451183 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb51a8f1-f230-4817-aac4-35ea2be675d3-tigera-ca-bundle\") pod \"calico-node-m2bff\" (UID: \"cb51a8f1-f230-4817-aac4-35ea2be675d3\") " pod="calico-system/calico-node-m2bff" Feb 13 15:23:09.559424 kubelet[2426]: E0213 15:23:09.559367 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.559424 kubelet[2426]: W0213 15:23:09.559401 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.559424 kubelet[2426]: E0213 15:23:09.559445 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.560139 kubelet[2426]: E0213 15:23:09.559858 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.560139 kubelet[2426]: W0213 15:23:09.559904 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.560139 kubelet[2426]: E0213 15:23:09.559945 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.560515 kubelet[2426]: E0213 15:23:09.560255 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.560515 kubelet[2426]: W0213 15:23:09.560272 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.560515 kubelet[2426]: E0213 15:23:09.560304 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:23:09.561248 kubelet[2426]: E0213 15:23:09.560967 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.561248 kubelet[2426]: W0213 15:23:09.560994 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.561248 kubelet[2426]: E0213 15:23:09.561034 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.561625 kubelet[2426]: E0213 15:23:09.561324 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.561625 kubelet[2426]: W0213 15:23:09.561341 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.561625 kubelet[2426]: E0213 15:23:09.561375 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.561800 kubelet[2426]: E0213 15:23:09.561648 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.561800 kubelet[2426]: W0213 15:23:09.561663 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.561800 kubelet[2426]: E0213 15:23:09.561699 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.562277 kubelet[2426]: E0213 15:23:09.561978 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.562277 kubelet[2426]: W0213 15:23:09.562005 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.562277 kubelet[2426]: E0213 15:23:09.562068 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.562761 kubelet[2426]: E0213 15:23:09.562343 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.562761 kubelet[2426]: W0213 15:23:09.562362 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.562761 kubelet[2426]: E0213 15:23:09.562423 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:23:09.562761 kubelet[2426]: E0213 15:23:09.562668 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.562761 kubelet[2426]: W0213 15:23:09.562684 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.563230 kubelet[2426]: E0213 15:23:09.563012 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.563230 kubelet[2426]: W0213 15:23:09.563039 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.563230 kubelet[2426]: E0213 15:23:09.563062 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.563230 kubelet[2426]: E0213 15:23:09.563097 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.564082 kubelet[2426]: E0213 15:23:09.563990 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.564082 kubelet[2426]: W0213 15:23:09.564011 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.564082 kubelet[2426]: E0213 15:23:09.564038 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.565019 kubelet[2426]: E0213 15:23:09.564422 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.565019 kubelet[2426]: W0213 15:23:09.564452 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.565019 kubelet[2426]: E0213 15:23:09.564478 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.587442 kubelet[2426]: E0213 15:23:09.587009 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.587442 kubelet[2426]: W0213 15:23:09.587042 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.587442 kubelet[2426]: E0213 15:23:09.587099 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:23:09.591331 kubelet[2426]: E0213 15:23:09.591057 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.591331 kubelet[2426]: W0213 15:23:09.591089 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.591331 kubelet[2426]: E0213 15:23:09.591124 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.591794 kubelet[2426]: E0213 15:23:09.591699 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.591794 kubelet[2426]: W0213 15:23:09.591720 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.591794 kubelet[2426]: E0213 15:23:09.591755 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.593047 kubelet[2426]: E0213 15:23:09.593007 2426 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:23:09.593047 kubelet[2426]: W0213 15:23:09.593040 2426 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:23:09.593174 kubelet[2426]: E0213 15:23:09.593071 2426 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:23:09.745270 containerd[1944]: time="2025-02-13T15:23:09.743822244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2bff,Uid:cb51a8f1-f230-4817-aac4-35ea2be675d3,Namespace:calico-system,Attempt:0,}" Feb 13 15:23:09.754239 containerd[1944]: time="2025-02-13T15:23:09.754168500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r62f4,Uid:ccd769fb-36c6-4f0b-8d66-2c085e4d790f,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:10.336763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927678719.mount: Deactivated successfully. 
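Editor's note: the repeated driver-call.go / plugins.go errors above all describe one condition. The kubelet's FlexVolume prober found the Calico `nodeagent~uds` plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tried to run its `uds` driver with the `init` argument, found no executable and therefore got empty output, and then failed to unmarshal that empty output as the JSON status a FlexVolume driver is expected to print. The flexvol-driver container started further down (15:23:12) appears to be the component that installs that binary, so the errors are transient. The sketch below is a simplified stdlib-only illustration of that probe sequence, not the kubelet's actual code; the paths are copied from the log.

```python
# Rough illustration of the FlexVolume "init" probe that the driver-call.go /
# plugins.go messages above are reporting on. Paths mirror the log; the logic
# is a simplified stand-in for the kubelet's, not a copy of it.
import json
import os
import subprocess

PLUGIN_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
DRIVER = os.path.join(PLUGIN_DIR, "uds")


def probe_flexvolume_driver(driver_path: str) -> dict:
    if not (os.path.isfile(driver_path) and os.access(driver_path, os.X_OK)):
        # Matches the log: "executable file not found in $PATH", output ""
        raise FileNotFoundError(f"driver executable missing: {driver_path}")
    out = subprocess.run([driver_path, "init"], capture_output=True, text=True)
    # A FlexVolume driver answers "init" with JSON such as
    # {"status": "Success", "capabilities": {...}}; empty output fails here,
    # which is the "unexpected end of JSON input" error in the log.
    return json.loads(out.stdout)


if __name__ == "__main__":
    try:
        print(probe_flexvolume_driver(DRIVER))
    except (FileNotFoundError, json.JSONDecodeError) as exc:
        print(f"probe failed (benign until the driver is installed): {exc}")
```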
Feb 13 15:23:10.346296 containerd[1944]: time="2025-02-13T15:23:10.345968339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:10.348527 containerd[1944]: time="2025-02-13T15:23:10.348452363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:23:10.351872 containerd[1944]: time="2025-02-13T15:23:10.351813467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:10.353739 containerd[1944]: time="2025-02-13T15:23:10.353359079Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:10.353739 containerd[1944]: time="2025-02-13T15:23:10.353669459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:23:10.364082 containerd[1944]: time="2025-02-13T15:23:10.364024703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:10.368250 containerd[1944]: time="2025-02-13T15:23:10.368182883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 613.858539ms" Feb 13 15:23:10.371552 containerd[1944]: time="2025-02-13T15:23:10.371206823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.201747ms" Feb 13 15:23:10.413549 kubelet[2426]: E0213 15:23:10.413447 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:10.543764 containerd[1944]: time="2025-02-13T15:23:10.543493920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:10.544074 containerd[1944]: time="2025-02-13T15:23:10.543618744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:10.544074 containerd[1944]: time="2025-02-13T15:23:10.543655872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:10.544074 containerd[1944]: time="2025-02-13T15:23:10.543804204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:10.550215 kubelet[2426]: E0213 15:23:10.550151 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:10.667673 containerd[1944]: time="2025-02-13T15:23:10.667431877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:10.667998 containerd[1944]: time="2025-02-13T15:23:10.667552249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:10.669164 containerd[1944]: time="2025-02-13T15:23:10.668991481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:10.670211 containerd[1944]: time="2025-02-13T15:23:10.670012441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:10.760212 systemd[1]: Started cri-containerd-4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866.scope - libcontainer container 4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866. Feb 13 15:23:10.777238 systemd[1]: Started cri-containerd-89f04bb3dec519a444596cbf21ffe0cb64424fdce1872ab2b0b6ad7588f4ec57.scope - libcontainer container 89f04bb3dec519a444596cbf21ffe0cb64424fdce1872ab2b0b6ad7588f4ec57. Feb 13 15:23:10.840089 containerd[1944]: time="2025-02-13T15:23:10.839857310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2bff,Uid:cb51a8f1-f230-4817-aac4-35ea2be675d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\"" Feb 13 15:23:10.844518 containerd[1944]: time="2025-02-13T15:23:10.844439294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:23:10.849922 containerd[1944]: time="2025-02-13T15:23:10.849790178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r62f4,Uid:ccd769fb-36c6-4f0b-8d66-2c085e4d790f,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f04bb3dec519a444596cbf21ffe0cb64424fdce1872ab2b0b6ad7588f4ec57\"" Feb 13 15:23:11.414245 kubelet[2426]: E0213 15:23:11.414184 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:12.228257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644296922.mount: Deactivated successfully. 
Feb 13 15:23:12.350832 containerd[1944]: time="2025-02-13T15:23:12.350549557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:12.352029 containerd[1944]: time="2025-02-13T15:23:12.351944377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 15:23:12.353151 containerd[1944]: time="2025-02-13T15:23:12.353066725Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:12.356923 containerd[1944]: time="2025-02-13T15:23:12.356798437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:12.358586 containerd[1944]: time="2025-02-13T15:23:12.358343893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.513848655s" Feb 13 15:23:12.358586 containerd[1944]: time="2025-02-13T15:23:12.358397005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:23:12.361579 containerd[1944]: time="2025-02-13T15:23:12.361047841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:23:12.363384 containerd[1944]: time="2025-02-13T15:23:12.362941153Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:23:12.382842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505026866.mount: Deactivated successfully. Feb 13 15:23:12.390840 containerd[1944]: time="2025-02-13T15:23:12.390620774Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99\"" Feb 13 15:23:12.391993 containerd[1944]: time="2025-02-13T15:23:12.391937414Z" level=info msg="StartContainer for \"cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99\"" Feb 13 15:23:12.415234 kubelet[2426]: E0213 15:23:12.415117 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:12.448183 systemd[1]: Started cri-containerd-cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99.scope - libcontainer container cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99. Feb 13 15:23:12.503298 containerd[1944]: time="2025-02-13T15:23:12.503157830Z" level=info msg="StartContainer for \"cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99\" returns successfully" Feb 13 15:23:12.526168 systemd[1]: cri-containerd-cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99.scope: Deactivated successfully. 
Feb 13 15:23:12.550806 kubelet[2426]: E0213 15:23:12.550747 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:12.616188 containerd[1944]: time="2025-02-13T15:23:12.616103835Z" level=info msg="shim disconnected" id=cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99 namespace=k8s.io Feb 13 15:23:12.616188 containerd[1944]: time="2025-02-13T15:23:12.616177587Z" level=warning msg="cleaning up after shim disconnected" id=cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99 namespace=k8s.io Feb 13 15:23:12.616651 containerd[1944]: time="2025-02-13T15:23:12.616197783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:13.190771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf19b506b2fd6cb71f61b10a4a8d173ed7ee5dd710caa7132c10014afcf3ff99-rootfs.mount: Deactivated successfully. Feb 13 15:23:13.415532 kubelet[2426]: E0213 15:23:13.415456 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:13.717654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207318995.mount: Deactivated successfully. Feb 13 15:23:14.264836 containerd[1944]: time="2025-02-13T15:23:14.264754695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:14.266424 containerd[1944]: time="2025-02-13T15:23:14.266351547Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 15:23:14.268155 containerd[1944]: time="2025-02-13T15:23:14.267948171Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:14.271699 containerd[1944]: time="2025-02-13T15:23:14.271591491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:14.273304 containerd[1944]: time="2025-02-13T15:23:14.273100863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.912000414s" Feb 13 15:23:14.273304 containerd[1944]: time="2025-02-13T15:23:14.273158907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:23:14.276648 containerd[1944]: time="2025-02-13T15:23:14.276340107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:23:14.278227 containerd[1944]: time="2025-02-13T15:23:14.278177739Z" level=info msg="CreateContainer within sandbox \"89f04bb3dec519a444596cbf21ffe0cb64424fdce1872ab2b0b6ad7588f4ec57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:23:14.298491 containerd[1944]: time="2025-02-13T15:23:14.298325307Z" level=info 
msg="CreateContainer within sandbox \"89f04bb3dec519a444596cbf21ffe0cb64424fdce1872ab2b0b6ad7588f4ec57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d55d54ab9a4adb704f6e152f8a97624ecd086c9850f63d7ad59e6bc406877487\"" Feb 13 15:23:14.299620 containerd[1944]: time="2025-02-13T15:23:14.299548383Z" level=info msg="StartContainer for \"d55d54ab9a4adb704f6e152f8a97624ecd086c9850f63d7ad59e6bc406877487\"" Feb 13 15:23:14.353252 systemd[1]: Started cri-containerd-d55d54ab9a4adb704f6e152f8a97624ecd086c9850f63d7ad59e6bc406877487.scope - libcontainer container d55d54ab9a4adb704f6e152f8a97624ecd086c9850f63d7ad59e6bc406877487. Feb 13 15:23:14.406947 containerd[1944]: time="2025-02-13T15:23:14.406774480Z" level=info msg="StartContainer for \"d55d54ab9a4adb704f6e152f8a97624ecd086c9850f63d7ad59e6bc406877487\" returns successfully" Feb 13 15:23:14.415945 kubelet[2426]: E0213 15:23:14.415821 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:14.550985 kubelet[2426]: E0213 15:23:14.550802 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:14.697162 kubelet[2426]: I0213 15:23:14.697054 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r62f4" podStartSLOduration=4.27474954 podStartE2EDuration="7.697034105s" podCreationTimestamp="2025-02-13 15:23:07 +0000 UTC" firstStartedPulling="2025-02-13 15:23:10.853188878 +0000 UTC m=+6.037099651" lastFinishedPulling="2025-02-13 15:23:14.275473467 +0000 UTC m=+9.459384216" observedRunningTime="2025-02-13 15:23:14.696692981 +0000 UTC m=+9.880603766" watchObservedRunningTime="2025-02-13 15:23:14.697034105 +0000 UTC m=+9.880944866" Feb 13 15:23:15.416502 kubelet[2426]: E0213 15:23:15.416438 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:16.417092 kubelet[2426]: E0213 15:23:16.416854 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:16.552536 kubelet[2426]: E0213 15:23:16.552455 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:17.417202 kubelet[2426]: E0213 15:23:17.417108 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:17.946650 containerd[1944]: time="2025-02-13T15:23:17.946567353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:17.949680 containerd[1944]: time="2025-02-13T15:23:17.949563705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:23:17.955117 containerd[1944]: time="2025-02-13T15:23:17.955032861Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:17.964034 containerd[1944]: time="2025-02-13T15:23:17.963979629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:17.966661 containerd[1944]: time="2025-02-13T15:23:17.966338805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.689940258s" Feb 13 15:23:17.966661 containerd[1944]: time="2025-02-13T15:23:17.966395037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:23:17.975288 containerd[1944]: time="2025-02-13T15:23:17.973254021Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:23:18.016009 containerd[1944]: time="2025-02-13T15:23:18.015924437Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead\"" Feb 13 15:23:18.016850 containerd[1944]: time="2025-02-13T15:23:18.016791245Z" level=info msg="StartContainer for \"d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead\"" Feb 13 15:23:18.074196 systemd[1]: Started cri-containerd-d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead.scope - libcontainer container d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead. Feb 13 15:23:18.132048 containerd[1944]: time="2025-02-13T15:23:18.131834286Z" level=info msg="StartContainer for \"d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead\" returns successfully" Feb 13 15:23:18.166993 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:23:18.417388 kubelet[2426]: E0213 15:23:18.417317 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:18.551309 kubelet[2426]: E0213 15:23:18.551232 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:19.004561 containerd[1944]: time="2025-02-13T15:23:19.004472598Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:23:19.008558 systemd[1]: cri-containerd-d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead.scope: Deactivated successfully. 
Feb 13 15:23:19.044581 kubelet[2426]: I0213 15:23:19.044321 2426 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:23:19.057522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead-rootfs.mount: Deactivated successfully. Feb 13 15:23:19.418252 kubelet[2426]: E0213 15:23:19.418155 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:20.359214 systemd[1]: Created slice kubepods-besteffort-pod5bc7496e_9dfe_4d8f_91c8_e9eb119f3241.slice - libcontainer container kubepods-besteffort-pod5bc7496e_9dfe_4d8f_91c8_e9eb119f3241.slice. Feb 13 15:23:20.365424 containerd[1944]: time="2025-02-13T15:23:20.365339577Z" level=info msg="shim disconnected" id=d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead namespace=k8s.io Feb 13 15:23:20.365424 containerd[1944]: time="2025-02-13T15:23:20.365417193Z" level=warning msg="cleaning up after shim disconnected" id=d92369a31acf7d5c1939164a34e6e227b928b958f3bc55bfb563763ff9c2dead namespace=k8s.io Feb 13 15:23:20.366247 containerd[1944]: time="2025-02-13T15:23:20.365438397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:20.419029 kubelet[2426]: E0213 15:23:20.418949 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:20.512564 kubelet[2426]: I0213 15:23:20.512468 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82l66\" (UniqueName: \"kubernetes.io/projected/5bc7496e-9dfe-4d8f-91c8-e9eb119f3241-kube-api-access-82l66\") pod \"nginx-deployment-8587fbcb89-8fn8f\" (UID: \"5bc7496e-9dfe-4d8f-91c8-e9eb119f3241\") " pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:20.559537 systemd[1]: Created slice kubepods-besteffort-podae81791c_2e14_4b2d_805b_1a1db95301cc.slice - libcontainer container kubepods-besteffort-podae81791c_2e14_4b2d_805b_1a1db95301cc.slice. 
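Editor's note: the figures in the pod_startup_latency_tracker entry for kube-proxy-r62f4 logged a little earlier are self-consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (15:23:14.697034105 minus 15:23:07 equals 7.697034105 s), and podStartSLOduration is that E2E value minus the image-pull window taken from the monotonic m=+ offsets (9.459384216 minus 6.037099651 equals 3.422284565 s), leaving about 4.27474954 s. The few lines below reproduce that arithmetic from the values printed in the log; the decomposition is an observation about those numbers, not a statement about kubelet internals.

```python
# Reproduce the kube-proxy-r62f4 startup-latency figures from the kubelet log
# above. All inputs are copied from the log; the decomposition is an observed
# relationship between those numbers, not kubelet source code.
pod_created_s        = 7.0            # 15:23:07 (podCreationTimestamp), seconds past 15:23:00
watch_observed_run_s = 14.697034105   # 15:23:14.697034105 (watchObservedRunningTime)
first_pull_m         = 6.037099651    # firstStartedPulling, m=+ offset
last_pull_m          = 9.459384216    # lastFinishedPulling, m=+ offset

e2e = watch_observed_run_s - pod_created_s   # -> 7.697034105 (podStartE2EDuration)
pull = last_pull_m - first_pull_m            # -> 3.422284565 (image pull window)
slo = e2e - pull                             # -> 4.274749540 (podStartSLOduration)

print(f"E2E  = {e2e:.9f} s")    # 7.697034105
print(f"pull = {pull:.9f} s")   # 3.422284565
print(f"SLO  = {slo:.9f} s")    # 4.274749540 (log prints 4.27474954)
```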
Feb 13 15:23:20.564334 containerd[1944]: time="2025-02-13T15:23:20.564181462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:0,}" Feb 13 15:23:20.666940 containerd[1944]: time="2025-02-13T15:23:20.666318287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:0,}" Feb 13 15:23:20.713772 containerd[1944]: time="2025-02-13T15:23:20.713720327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:23:20.714364 containerd[1944]: time="2025-02-13T15:23:20.714286571Z" level=error msg="Failed to destroy network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.715285 containerd[1944]: time="2025-02-13T15:23:20.715233047Z" level=error msg="encountered an error cleaning up failed sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.717203 containerd[1944]: time="2025-02-13T15:23:20.717043319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.717601 kubelet[2426]: E0213 15:23:20.717540 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.717714 kubelet[2426]: E0213 15:23:20.717636 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:20.717714 kubelet[2426]: E0213 15:23:20.717689 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:20.717846 kubelet[2426]: E0213 15:23:20.717749 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:20.719104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422-shm.mount: Deactivated successfully. Feb 13 15:23:20.790291 containerd[1944]: time="2025-02-13T15:23:20.790195067Z" level=error msg="Failed to destroy network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.790868 containerd[1944]: time="2025-02-13T15:23:20.790803815Z" level=error msg="encountered an error cleaning up failed sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.790992 containerd[1944]: time="2025-02-13T15:23:20.790934483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.791453 kubelet[2426]: E0213 15:23:20.791360 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:20.791537 kubelet[2426]: E0213 15:23:20.791495 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:20.791635 kubelet[2426]: E0213 15:23:20.791533 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:20.791729 kubelet[2426]: E0213 15:23:20.791625 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:21.419466 kubelet[2426]: E0213 15:23:21.419401 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:21.576219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606-shm.mount: Deactivated successfully. Feb 13 15:23:21.712250 kubelet[2426]: I0213 15:23:21.712117 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606" Feb 13 15:23:21.714918 containerd[1944]: time="2025-02-13T15:23:21.714698808Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:21.717221 kubelet[2426]: I0213 15:23:21.716638 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422" Feb 13 15:23:21.718532 containerd[1944]: time="2025-02-13T15:23:21.716796276Z" level=info msg="Ensure that sandbox 79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606 in task-service has been cleanup successfully" Feb 13 15:23:21.718532 containerd[1944]: time="2025-02-13T15:23:21.717687480Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:21.718532 containerd[1944]: time="2025-02-13T15:23:21.718127088Z" level=info msg="Ensure that sandbox 03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422 in task-service has been cleanup successfully" Feb 13 15:23:21.719132 containerd[1944]: time="2025-02-13T15:23:21.719073348Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:21.719445 containerd[1944]: time="2025-02-13T15:23:21.719278560Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:21.720258 containerd[1944]: time="2025-02-13T15:23:21.720064140Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:21.720258 containerd[1944]: time="2025-02-13T15:23:21.720110268Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:21.723869 systemd[1]: run-netns-cni\x2d91d81fd0\x2d67ae\x2db183\x2d9893\x2d29750592ed7b.mount: Deactivated successfully. 
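Editor's note: every failed RunPodSandbox above carries the same underlying CNI error: the Calico plugin stats /var/lib/calico/nodename, a file that exists only once the calico-node container (still pulling its images at this point in the log) is running and has mounted /var/lib/calico/. The error text itself names the check; the stdlib sketch below simply performs it and reports what it finds. Paths are copied from the log lines; this is a diagnostic illustration, not part of Calico or containerd.

```python
# The sandbox failures above all reduce to one missing file. This sketch just
# performs the check the error message describes ("stat /var/lib/calico/nodename")
# and reports what it finds; paths are copied from the log lines.
import os

NODENAME_FILE = "/var/lib/calico/nodename"   # written once calico-node is running
CALICO_DIR = "/var/lib/calico"               # hostPath mounted into calico-node


def calico_node_ready() -> bool:
    if not os.path.isdir(CALICO_DIR):
        print(f"{CALICO_DIR} missing: calico-node has not mounted /var/lib/calico/")
        return False
    try:
        with open(NODENAME_FILE, encoding="utf-8") as fh:
            nodename = fh.read().strip()
    except FileNotFoundError:
        print(f"{NODENAME_FILE} missing: calico-node is not running yet")
        return False
    print(f"calico-node has registered this host as {nodename!r}")
    return True


if __name__ == "__main__":
    calico_node_ready()
```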
Feb 13 15:23:21.724465 containerd[1944]: time="2025-02-13T15:23:21.722906796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:1,}" Feb 13 15:23:21.725302 containerd[1944]: time="2025-02-13T15:23:21.725231796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:1,}" Feb 13 15:23:21.725695 systemd[1]: run-netns-cni\x2d5c21a359\x2de02c\x2dcead\x2d3aa9\x2d71646b52397b.mount: Deactivated successfully. Feb 13 15:23:21.903633 containerd[1944]: time="2025-02-13T15:23:21.903429685Z" level=error msg="Failed to destroy network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.906514 containerd[1944]: time="2025-02-13T15:23:21.906253561Z" level=error msg="encountered an error cleaning up failed sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.906514 containerd[1944]: time="2025-02-13T15:23:21.906367585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.907435 kubelet[2426]: E0213 15:23:21.907023 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.907435 kubelet[2426]: E0213 15:23:21.907352 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:21.907435 kubelet[2426]: E0213 15:23:21.907392 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:21.908784 kubelet[2426]: E0213 15:23:21.907620 2426 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:21.913265 containerd[1944]: time="2025-02-13T15:23:21.913205965Z" level=error msg="Failed to destroy network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.914080 containerd[1944]: time="2025-02-13T15:23:21.913951561Z" level=error msg="encountered an error cleaning up failed sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.914080 containerd[1944]: time="2025-02-13T15:23:21.914041705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.914957 kubelet[2426]: E0213 15:23:21.914339 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:21.914957 kubelet[2426]: E0213 15:23:21.914427 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:21.914957 kubelet[2426]: E0213 15:23:21.914471 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:21.915272 kubelet[2426]: E0213 15:23:21.914554 
2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:22.420479 kubelet[2426]: E0213 15:23:22.420399 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:22.579711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735-shm.mount: Deactivated successfully. Feb 13 15:23:22.581277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664-shm.mount: Deactivated successfully. Feb 13 15:23:22.722228 kubelet[2426]: I0213 15:23:22.722076 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664" Feb 13 15:23:22.727919 containerd[1944]: time="2025-02-13T15:23:22.723685909Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:22.727919 containerd[1944]: time="2025-02-13T15:23:22.723991993Z" level=info msg="Ensure that sandbox 511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664 in task-service has been cleanup successfully" Feb 13 15:23:22.728463 kubelet[2426]: I0213 15:23:22.727130 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735" Feb 13 15:23:22.728535 containerd[1944]: time="2025-02-13T15:23:22.728025817Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:22.728535 containerd[1944]: time="2025-02-13T15:23:22.728298493Z" level=info msg="Ensure that sandbox c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735 in task-service has been cleanup successfully" Feb 13 15:23:22.729968 containerd[1944]: time="2025-02-13T15:23:22.728943241Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:22.729968 containerd[1944]: time="2025-02-13T15:23:22.728995057Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:22.729968 containerd[1944]: time="2025-02-13T15:23:22.729691201Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:22.729968 containerd[1944]: time="2025-02-13T15:23:22.729836365Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:22.729968 containerd[1944]: time="2025-02-13T15:23:22.729858253Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 
13 15:23:22.732166 containerd[1944]: time="2025-02-13T15:23:22.731714449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:2,}" Feb 13 15:23:22.733058 systemd[1]: run-netns-cni\x2dfa865862\x2df9b0\x2d2d90\x2d0305\x2d4b3ab3157b6f.mount: Deactivated successfully. Feb 13 15:23:22.736253 containerd[1944]: time="2025-02-13T15:23:22.735610885Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:22.736433 containerd[1944]: time="2025-02-13T15:23:22.736397881Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:22.737858 systemd[1]: run-netns-cni\x2d35206f1f\x2d3641\x2dd48f\x2d869d\x2d855d712a162c.mount: Deactivated successfully. Feb 13 15:23:22.740338 containerd[1944]: time="2025-02-13T15:23:22.740076265Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:22.740546 containerd[1944]: time="2025-02-13T15:23:22.740308369Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:22.740546 containerd[1944]: time="2025-02-13T15:23:22.740486713Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:22.742539 containerd[1944]: time="2025-02-13T15:23:22.742185325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:2,}" Feb 13 15:23:23.076477 containerd[1944]: time="2025-02-13T15:23:23.076292651Z" level=error msg="Failed to destroy network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.077477 containerd[1944]: time="2025-02-13T15:23:23.077415887Z" level=error msg="encountered an error cleaning up failed sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.077594 containerd[1944]: time="2025-02-13T15:23:23.077527763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.078978 kubelet[2426]: E0213 15:23:23.077849 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:23:23.078978 kubelet[2426]: E0213 15:23:23.077973 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:23.078978 kubelet[2426]: E0213 15:23:23.078011 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:23.079200 kubelet[2426]: E0213 15:23:23.078094 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:23.094689 containerd[1944]: time="2025-02-13T15:23:23.094350443Z" level=error msg="Failed to destroy network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.095477 containerd[1944]: time="2025-02-13T15:23:23.095418551Z" level=error msg="encountered an error cleaning up failed sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.095640 containerd[1944]: time="2025-02-13T15:23:23.095529167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.095854 kubelet[2426]: E0213 15:23:23.095808 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.096096 kubelet[2426]: E0213 15:23:23.095908 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:23.096096 kubelet[2426]: E0213 15:23:23.095946 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:23.096096 kubelet[2426]: E0213 15:23:23.096021 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:23.420980 kubelet[2426]: E0213 15:23:23.420570 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:23.580262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591-shm.mount: Deactivated successfully. 
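The failures above all trace to one condition: the Calico CNI plugin stats /var/lib/calico/nodename on both add and delete, and the file is absent because the calico/node container has not yet started and mounted /var/lib/calico/, exactly as the error text itself says. A minimal illustrative sketch of that readiness check, written here in Python rather than taken from the plugin's own code, with the path copied verbatim from the log:

import sys

# Path the CNI plugin stats, taken from the error message in the log above.
NODENAME_FILE = "/var/lib/calico/nodename"

def nodename_ready(path: str = NODENAME_FILE) -> bool:
    """Return True once calico/node has written a non-empty nodename file."""
    try:
        with open(path) as f:
            return bool(f.read().strip())
    except FileNotFoundError:
        # Same condition the plugin reports: "no such file or directory".
        return False

if __name__ == "__main__":
    if nodename_ready():
        print("nodename present; CNI add/delete should be able to proceed")
        sys.exit(0)
    print("missing /var/lib/calico/nodename: check that calico/node is running")
    sys.exit(1)

Until that file appears, every sandbox attempt in the entries that follow fails the same way and kubelet keeps retrying.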
Feb 13 15:23:23.735931 kubelet[2426]: I0213 15:23:23.735288 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e" Feb 13 15:23:23.737673 containerd[1944]: time="2025-02-13T15:23:23.737370434Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:23.738240 containerd[1944]: time="2025-02-13T15:23:23.737732282Z" level=info msg="Ensure that sandbox d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e in task-service has been cleanup successfully" Feb 13 15:23:23.741415 containerd[1944]: time="2025-02-13T15:23:23.741048794Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:23.741415 containerd[1944]: time="2025-02-13T15:23:23.741135614Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:23.742632 containerd[1944]: time="2025-02-13T15:23:23.742359938Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:23.742632 containerd[1944]: time="2025-02-13T15:23:23.742542758Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:23.742632 containerd[1944]: time="2025-02-13T15:23:23.742564922Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:23.743565 systemd[1]: run-netns-cni\x2dce7b67d9\x2d5b31\x2de231\x2de012\x2db573d8652fc7.mount: Deactivated successfully. Feb 13 15:23:23.748286 containerd[1944]: time="2025-02-13T15:23:23.748196186Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:23.748432 containerd[1944]: time="2025-02-13T15:23:23.748371398Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:23.748432 containerd[1944]: time="2025-02-13T15:23:23.748396346Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:23.750008 containerd[1944]: time="2025-02-13T15:23:23.749943014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:3,}" Feb 13 15:23:23.754795 kubelet[2426]: I0213 15:23:23.754553 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591" Feb 13 15:23:23.756733 containerd[1944]: time="2025-02-13T15:23:23.756679334Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:23.757020 containerd[1944]: time="2025-02-13T15:23:23.756996074Z" level=info msg="Ensure that sandbox a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591 in task-service has been cleanup successfully" Feb 13 15:23:23.760373 containerd[1944]: time="2025-02-13T15:23:23.760134830Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:23.760373 containerd[1944]: time="2025-02-13T15:23:23.760258466Z" level=info msg="StopPodSandbox for 
\"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:23.763270 systemd[1]: run-netns-cni\x2db7d257bd\x2d1df9\x2da67c\x2d3f7b\x2d79ba72a9c54c.mount: Deactivated successfully. Feb 13 15:23:23.764181 containerd[1944]: time="2025-02-13T15:23:23.763944326Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:23.764181 containerd[1944]: time="2025-02-13T15:23:23.764115326Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:23.764181 containerd[1944]: time="2025-02-13T15:23:23.764138534Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:23.767545 containerd[1944]: time="2025-02-13T15:23:23.767341514Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:23.767545 containerd[1944]: time="2025-02-13T15:23:23.767515994Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:23.767545 containerd[1944]: time="2025-02-13T15:23:23.767543762Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:23.773007 containerd[1944]: time="2025-02-13T15:23:23.770248574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:3,}" Feb 13 15:23:23.947408 containerd[1944]: time="2025-02-13T15:23:23.947085171Z" level=error msg="Failed to destroy network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.949073 containerd[1944]: time="2025-02-13T15:23:23.948841359Z" level=error msg="encountered an error cleaning up failed sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.949073 containerd[1944]: time="2025-02-13T15:23:23.948987495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.949329 kubelet[2426]: E0213 15:23:23.949261 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:23.949417 kubelet[2426]: E0213 15:23:23.949343 2426 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:23.949417 kubelet[2426]: E0213 15:23:23.949383 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:23.950292 kubelet[2426]: E0213 15:23:23.949471 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:23.998338 containerd[1944]: time="2025-02-13T15:23:23.997100499Z" level=error msg="Failed to destroy network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:24.000653 containerd[1944]: time="2025-02-13T15:23:24.000493475Z" level=error msg="encountered an error cleaning up failed sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:24.000653 containerd[1944]: time="2025-02-13T15:23:24.000606023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:24.000983 kubelet[2426]: E0213 15:23:24.000925 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:23:24.001093 kubelet[2426]: E0213 15:23:24.001002 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:24.001093 kubelet[2426]: E0213 15:23:24.001035 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:24.001209 kubelet[2426]: E0213 15:23:24.001097 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:24.421895 kubelet[2426]: E0213 15:23:24.421422 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:24.579366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8-shm.mount: Deactivated successfully. Feb 13 15:23:24.762717 kubelet[2426]: I0213 15:23:24.762595 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557" Feb 13 15:23:24.766378 containerd[1944]: time="2025-02-13T15:23:24.766030611Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:23:24.769926 containerd[1944]: time="2025-02-13T15:23:24.766493775Z" level=info msg="Ensure that sandbox 290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557 in task-service has been cleanup successfully" Feb 13 15:23:24.771457 containerd[1944]: time="2025-02-13T15:23:24.771228603Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:23:24.772238 systemd[1]: run-netns-cni\x2d89d07603\x2df10c\x2d6c6d\x2dad2d\x2d9c4372e9dc56.mount: Deactivated successfully. 
Feb 13 15:23:24.774946 containerd[1944]: time="2025-02-13T15:23:24.774520527Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:23:24.775182 kubelet[2426]: I0213 15:23:24.774607 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8" Feb 13 15:23:24.777925 containerd[1944]: time="2025-02-13T15:23:24.777836655Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:23:24.778845 containerd[1944]: time="2025-02-13T15:23:24.778712391Z" level=info msg="Ensure that sandbox c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8 in task-service has been cleanup successfully" Feb 13 15:23:24.780060 containerd[1944]: time="2025-02-13T15:23:24.777922527Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:24.780060 containerd[1944]: time="2025-02-13T15:23:24.779399631Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:24.780060 containerd[1944]: time="2025-02-13T15:23:24.779423139Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:24.782049 containerd[1944]: time="2025-02-13T15:23:24.781983375Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:23:24.782049 containerd[1944]: time="2025-02-13T15:23:24.782037843Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:23:24.784086 systemd[1]: run-netns-cni\x2d1ac96656\x2d43e2\x2d7699\x2df26c\x2d70db008c5862.mount: Deactivated successfully. 
Feb 13 15:23:24.784846 containerd[1944]: time="2025-02-13T15:23:24.784792743Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:24.787155 containerd[1944]: time="2025-02-13T15:23:24.787111587Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:24.788262 containerd[1944]: time="2025-02-13T15:23:24.788100231Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:24.788262 containerd[1944]: time="2025-02-13T15:23:24.788159295Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:24.789753 containerd[1944]: time="2025-02-13T15:23:24.789164799Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:24.789753 containerd[1944]: time="2025-02-13T15:23:24.789204147Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:24.789753 containerd[1944]: time="2025-02-13T15:23:24.789471267Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:24.789753 containerd[1944]: time="2025-02-13T15:23:24.789612471Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:24.789753 containerd[1944]: time="2025-02-13T15:23:24.789656295Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:24.790489 containerd[1944]: time="2025-02-13T15:23:24.790164927Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:24.790489 containerd[1944]: time="2025-02-13T15:23:24.790317387Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:24.790489 containerd[1944]: time="2025-02-13T15:23:24.790339899Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:24.790489 containerd[1944]: time="2025-02-13T15:23:24.790455303Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:24.791199 containerd[1944]: time="2025-02-13T15:23:24.790576131Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:24.791199 containerd[1944]: time="2025-02-13T15:23:24.790597743Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:24.792073 containerd[1944]: time="2025-02-13T15:23:24.791812251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:4,}" Feb 13 15:23:24.792904 containerd[1944]: time="2025-02-13T15:23:24.792751347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:4,}" Feb 13 15:23:25.009361 containerd[1944]: time="2025-02-13T15:23:25.009228888Z" level=error 
msg="Failed to destroy network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.010060 containerd[1944]: time="2025-02-13T15:23:25.009872580Z" level=error msg="encountered an error cleaning up failed sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.010181 containerd[1944]: time="2025-02-13T15:23:25.010058640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.011037 kubelet[2426]: E0213 15:23:25.010733 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.011037 kubelet[2426]: E0213 15:23:25.011012 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:25.011507 kubelet[2426]: E0213 15:23:25.011053 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:25.011507 kubelet[2426]: E0213 15:23:25.011420 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:25.028590 containerd[1944]: 
time="2025-02-13T15:23:25.028345020Z" level=error msg="Failed to destroy network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.030788 containerd[1944]: time="2025-02-13T15:23:25.030576420Z" level=error msg="encountered an error cleaning up failed sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.031180 containerd[1944]: time="2025-02-13T15:23:25.030752004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.031710 kubelet[2426]: E0213 15:23:25.031643 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:25.031847 kubelet[2426]: E0213 15:23:25.031728 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:25.031847 kubelet[2426]: E0213 15:23:25.031764 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:25.032223 kubelet[2426]: E0213 15:23:25.031837 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" 
podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:25.422030 kubelet[2426]: E0213 15:23:25.421928 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:25.579245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568-shm.mount: Deactivated successfully. Feb 13 15:23:25.580102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9-shm.mount: Deactivated successfully. Feb 13 15:23:25.788208 kubelet[2426]: I0213 15:23:25.786858 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568" Feb 13 15:23:25.788629 containerd[1944]: time="2025-02-13T15:23:25.788410996Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:23:25.789272 containerd[1944]: time="2025-02-13T15:23:25.788703340Z" level=info msg="Ensure that sandbox ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568 in task-service has been cleanup successfully" Feb 13 15:23:25.795418 systemd[1]: run-netns-cni\x2d7b3dd3d7\x2d3ba1\x2d0276\x2d7988\x2df6bbf9d68139.mount: Deactivated successfully. Feb 13 15:23:25.797004 containerd[1944]: time="2025-02-13T15:23:25.793587952Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:23:25.798019 containerd[1944]: time="2025-02-13T15:23:25.797957500Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:23:25.799786 containerd[1944]: time="2025-02-13T15:23:25.799663600Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:23:25.800358 containerd[1944]: time="2025-02-13T15:23:25.800310880Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:23:25.800781 containerd[1944]: time="2025-02-13T15:23:25.800352760Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:23:25.802493 containerd[1944]: time="2025-02-13T15:23:25.802409044Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:25.802732 containerd[1944]: time="2025-02-13T15:23:25.802647628Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:25.802732 containerd[1944]: time="2025-02-13T15:23:25.802678096Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:25.805357 containerd[1944]: time="2025-02-13T15:23:25.805265692Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:25.805477 containerd[1944]: time="2025-02-13T15:23:25.805428640Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:25.805477 containerd[1944]: time="2025-02-13T15:23:25.805453084Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns 
successfully" Feb 13 15:23:25.807686 containerd[1944]: time="2025-02-13T15:23:25.807577072Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:25.808395 containerd[1944]: time="2025-02-13T15:23:25.808219804Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:25.808395 containerd[1944]: time="2025-02-13T15:23:25.808385788Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:25.810364 containerd[1944]: time="2025-02-13T15:23:25.810308032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:5,}" Feb 13 15:23:25.811965 kubelet[2426]: I0213 15:23:25.811899 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9" Feb 13 15:23:25.815584 containerd[1944]: time="2025-02-13T15:23:25.815390776Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:23:25.815729 containerd[1944]: time="2025-02-13T15:23:25.815678920Z" level=info msg="Ensure that sandbox 05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9 in task-service has been cleanup successfully" Feb 13 15:23:25.818263 containerd[1944]: time="2025-02-13T15:23:25.818091580Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:23:25.818263 containerd[1944]: time="2025-02-13T15:23:25.818141368Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 15:23:25.821008 systemd[1]: run-netns-cni\x2d99470af3\x2dc607\x2d7492\x2dbd45\x2dc16d61ce91c3.mount: Deactivated successfully. 
Feb 13 15:23:25.822104 containerd[1944]: time="2025-02-13T15:23:25.821770888Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:23:25.823861 containerd[1944]: time="2025-02-13T15:23:25.823811020Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:23:25.823861 containerd[1944]: time="2025-02-13T15:23:25.823856824Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:23:25.825856 containerd[1944]: time="2025-02-13T15:23:25.825783448Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:25.826980 containerd[1944]: time="2025-02-13T15:23:25.826816960Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:25.826980 containerd[1944]: time="2025-02-13T15:23:25.826861624Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:25.828601 containerd[1944]: time="2025-02-13T15:23:25.827975104Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:25.828601 containerd[1944]: time="2025-02-13T15:23:25.828530596Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:25.828601 containerd[1944]: time="2025-02-13T15:23:25.828554140Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:25.829732 containerd[1944]: time="2025-02-13T15:23:25.829531612Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:25.829732 containerd[1944]: time="2025-02-13T15:23:25.829728436Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:25.830013 containerd[1944]: time="2025-02-13T15:23:25.829751908Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:25.831491 containerd[1944]: time="2025-02-13T15:23:25.831254332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:5,}" Feb 13 15:23:26.064529 containerd[1944]: time="2025-02-13T15:23:26.064446337Z" level=error msg="Failed to destroy network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.065357 containerd[1944]: time="2025-02-13T15:23:26.065159917Z" level=error msg="encountered an error cleaning up failed sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.065357 containerd[1944]: time="2025-02-13T15:23:26.065261893Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.066313 kubelet[2426]: E0213 15:23:26.065604 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.066313 kubelet[2426]: E0213 15:23:26.065697 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:26.066313 kubelet[2426]: E0213 15:23:26.065729 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:26.066495 kubelet[2426]: E0213 15:23:26.065800 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:26.081944 containerd[1944]: time="2025-02-13T15:23:26.081312854Z" level=error msg="Failed to destroy network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.082969 containerd[1944]: time="2025-02-13T15:23:26.082906898Z" level=error msg="encountered an error cleaning up failed sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:23:26.083086 containerd[1944]: time="2025-02-13T15:23:26.083023742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.083490 kubelet[2426]: E0213 15:23:26.083312 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:26.083490 kubelet[2426]: E0213 15:23:26.083392 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:26.083490 kubelet[2426]: E0213 15:23:26.083427 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:26.083792 kubelet[2426]: E0213 15:23:26.083497 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:26.423371 kubelet[2426]: E0213 15:23:26.422994 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:26.579692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb-shm.mount: Deactivated successfully. 
Feb 13 15:23:26.819266 kubelet[2426]: I0213 15:23:26.819218 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a" Feb 13 15:23:26.820715 containerd[1944]: time="2025-02-13T15:23:26.820669133Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:23:26.821985 containerd[1944]: time="2025-02-13T15:23:26.821938013Z" level=info msg="Ensure that sandbox 601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a in task-service has been cleanup successfully" Feb 13 15:23:26.825296 containerd[1944]: time="2025-02-13T15:23:26.825243797Z" level=info msg="TearDown network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" successfully" Feb 13 15:23:26.825473 containerd[1944]: time="2025-02-13T15:23:26.825445073Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" returns successfully" Feb 13 15:23:26.826389 systemd[1]: run-netns-cni\x2d27740ba7\x2de9a0\x2d9058\x2dc532\x2d5798fbb9827b.mount: Deactivated successfully. Feb 13 15:23:26.829332 containerd[1944]: time="2025-02-13T15:23:26.828862805Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:23:26.829332 containerd[1944]: time="2025-02-13T15:23:26.829055609Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:23:26.829332 containerd[1944]: time="2025-02-13T15:23:26.829079597Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 15:23:26.831620 containerd[1944]: time="2025-02-13T15:23:26.831033713Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:23:26.831620 containerd[1944]: time="2025-02-13T15:23:26.831188213Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:23:26.831620 containerd[1944]: time="2025-02-13T15:23:26.831213881Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:23:26.832339 containerd[1944]: time="2025-02-13T15:23:26.832299485Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:26.832807 containerd[1944]: time="2025-02-13T15:23:26.832775837Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:26.833132 containerd[1944]: time="2025-02-13T15:23:26.833103749Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:26.836158 containerd[1944]: time="2025-02-13T15:23:26.835698305Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:26.836158 containerd[1944]: time="2025-02-13T15:23:26.835847921Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:26.836158 containerd[1944]: time="2025-02-13T15:23:26.835868981Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns 
successfully" Feb 13 15:23:26.837903 containerd[1944]: time="2025-02-13T15:23:26.837515213Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:26.837903 containerd[1944]: time="2025-02-13T15:23:26.837699029Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:26.837903 containerd[1944]: time="2025-02-13T15:23:26.837722657Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:26.838898 kubelet[2426]: I0213 15:23:26.838400 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb" Feb 13 15:23:26.839745 containerd[1944]: time="2025-02-13T15:23:26.839428025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:6,}" Feb 13 15:23:26.841783 containerd[1944]: time="2025-02-13T15:23:26.841719977Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:23:26.843296 containerd[1944]: time="2025-02-13T15:23:26.842307437Z" level=info msg="Ensure that sandbox 2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb in task-service has been cleanup successfully" Feb 13 15:23:26.843296 containerd[1944]: time="2025-02-13T15:23:26.842595557Z" level=info msg="TearDown network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" successfully" Feb 13 15:23:26.843296 containerd[1944]: time="2025-02-13T15:23:26.842624297Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" returns successfully" Feb 13 15:23:26.846750 systemd[1]: run-netns-cni\x2d66ce974e\x2d2b8e\x2d0a66\x2d96b9\x2d46c67b2ceb57.mount: Deactivated successfully. 
Feb 13 15:23:26.862951 containerd[1944]: time="2025-02-13T15:23:26.861587429Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:23:26.862951 containerd[1944]: time="2025-02-13T15:23:26.862162637Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:23:26.862951 containerd[1944]: time="2025-02-13T15:23:26.862191605Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:23:26.864242 containerd[1944]: time="2025-02-13T15:23:26.864186701Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:23:26.864418 containerd[1944]: time="2025-02-13T15:23:26.864375773Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:23:26.865201 containerd[1944]: time="2025-02-13T15:23:26.864412253Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:23:26.874272 containerd[1944]: time="2025-02-13T15:23:26.873120305Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:26.874272 containerd[1944]: time="2025-02-13T15:23:26.873295253Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:26.874272 containerd[1944]: time="2025-02-13T15:23:26.873317681Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:26.879908 containerd[1944]: time="2025-02-13T15:23:26.876862541Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:26.879908 containerd[1944]: time="2025-02-13T15:23:26.877198841Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:26.879908 containerd[1944]: time="2025-02-13T15:23:26.877247309Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:26.883394 containerd[1944]: time="2025-02-13T15:23:26.883015073Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:26.883573 containerd[1944]: time="2025-02-13T15:23:26.883524462Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:26.883657 containerd[1944]: time="2025-02-13T15:23:26.883565082Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:26.886647 containerd[1944]: time="2025-02-13T15:23:26.886198422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:6,}" Feb 13 15:23:27.075497 containerd[1944]: time="2025-02-13T15:23:27.075360878Z" level=error msg="Failed to destroy network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.080401 containerd[1944]: time="2025-02-13T15:23:27.080327138Z" level=error msg="encountered an error cleaning up failed sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.081038 containerd[1944]: time="2025-02-13T15:23:27.080997218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.082352 containerd[1944]: time="2025-02-13T15:23:27.082296854Z" level=error msg="Failed to destroy network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.082507 kubelet[2426]: E0213 15:23:27.082320 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.082507 kubelet[2426]: E0213 15:23:27.082390 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:27.082507 kubelet[2426]: E0213 15:23:27.082423 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:27.082674 kubelet[2426]: E0213 15:23:27.082502 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:27.084443 containerd[1944]: time="2025-02-13T15:23:27.083487986Z" level=error msg="encountered an error cleaning up failed sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.084443 containerd[1944]: time="2025-02-13T15:23:27.083613458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.084742 kubelet[2426]: E0213 15:23:27.084627 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:27.084808 kubelet[2426]: E0213 15:23:27.084753 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:27.084808 kubelet[2426]: E0213 15:23:27.084787 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:27.084981 kubelet[2426]: E0213 15:23:27.084869 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:27.403239 kubelet[2426]: E0213 15:23:27.403095 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:27.423340 
kubelet[2426]: E0213 15:23:27.423273 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:27.580870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31-shm.mount: Deactivated successfully. Feb 13 15:23:27.624961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992765249.mount: Deactivated successfully. Feb 13 15:23:27.695262 containerd[1944]: time="2025-02-13T15:23:27.694276422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:27.696686 containerd[1944]: time="2025-02-13T15:23:27.696596214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:23:27.699059 containerd[1944]: time="2025-02-13T15:23:27.698988870Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:27.703535 containerd[1944]: time="2025-02-13T15:23:27.703467474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:27.705217 containerd[1944]: time="2025-02-13T15:23:27.705036498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.990735923s" Feb 13 15:23:27.705217 containerd[1944]: time="2025-02-13T15:23:27.705086802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:23:27.729331 containerd[1944]: time="2025-02-13T15:23:27.728113866Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:23:27.760610 containerd[1944]: time="2025-02-13T15:23:27.760555902Z" level=info msg="CreateContainer within sandbox \"4ef5350f30e7912961341d753e4511f0cb3cca0f95cd2de28909b26e26c83866\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54e57aedd4d360b37c90de3f8fa12cd518796b96a4a85a9000278509a9c4d65d\"" Feb 13 15:23:27.761556 containerd[1944]: time="2025-02-13T15:23:27.761481882Z" level=info msg="StartContainer for \"54e57aedd4d360b37c90de3f8fa12cd518796b96a4a85a9000278509a9c4d65d\"" Feb 13 15:23:27.805197 systemd[1]: Started cri-containerd-54e57aedd4d360b37c90de3f8fa12cd518796b96a4a85a9000278509a9c4d65d.scope - libcontainer container 54e57aedd4d360b37c90de3f8fa12cd518796b96a4a85a9000278509a9c4d65d. 
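Amid the retries, the calico/node:v3.29.1 image pull completes: containerd reports a size of 137671624 bytes pulled in 6.990735923s, roughly 19.7 MB/s, and the calico-node container is created and started. A quick back-of-the-envelope check with the numbers copied from the log (the arithmetic is only illustrative):

```go
package main

import "fmt"

func main() {
	const bytesPulled = 137671624.0 // size reported in the "Pulled image" line
	const seconds = 6.990735923     // "in 6.990735923s" from the same line

	rate := bytesPulled / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
	// prints: 19.7 MB/s (18.8 MiB/s)
}
```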
Feb 13 15:23:27.854415 kubelet[2426]: I0213 15:23:27.853526 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34" Feb 13 15:23:27.859905 containerd[1944]: time="2025-02-13T15:23:27.857685294Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" Feb 13 15:23:27.861818 containerd[1944]: time="2025-02-13T15:23:27.861424374Z" level=info msg="Ensure that sandbox 63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34 in task-service has been cleanup successfully" Feb 13 15:23:27.862729 containerd[1944]: time="2025-02-13T15:23:27.862683066Z" level=info msg="TearDown network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" successfully" Feb 13 15:23:27.863086 containerd[1944]: time="2025-02-13T15:23:27.863048598Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" returns successfully" Feb 13 15:23:27.864302 containerd[1944]: time="2025-02-13T15:23:27.863978418Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:23:27.864302 containerd[1944]: time="2025-02-13T15:23:27.864141714Z" level=info msg="TearDown network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" successfully" Feb 13 15:23:27.864302 containerd[1944]: time="2025-02-13T15:23:27.864164658Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" returns successfully" Feb 13 15:23:27.865769 containerd[1944]: time="2025-02-13T15:23:27.865385118Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:23:27.865769 containerd[1944]: time="2025-02-13T15:23:27.865577874Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:23:27.865769 containerd[1944]: time="2025-02-13T15:23:27.865599918Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:23:27.867541 containerd[1944]: time="2025-02-13T15:23:27.866812518Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:23:27.869132 containerd[1944]: time="2025-02-13T15:23:27.868538250Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:23:27.869132 containerd[1944]: time="2025-02-13T15:23:27.868693050Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:23:27.872185 containerd[1944]: time="2025-02-13T15:23:27.870508590Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:27.872185 containerd[1944]: time="2025-02-13T15:23:27.870777774Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:27.872185 containerd[1944]: time="2025-02-13T15:23:27.870804834Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:27.872471 kubelet[2426]: I0213 15:23:27.871120 2426 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31" Feb 13 15:23:27.872984 containerd[1944]: time="2025-02-13T15:23:27.872930958Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" Feb 13 15:23:27.874417 containerd[1944]: time="2025-02-13T15:23:27.873267042Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:27.874572 containerd[1944]: time="2025-02-13T15:23:27.874528830Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:27.874572 containerd[1944]: time="2025-02-13T15:23:27.874551618Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:27.875454 containerd[1944]: time="2025-02-13T15:23:27.875403090Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:27.875581 containerd[1944]: time="2025-02-13T15:23:27.875543298Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:27.875663 containerd[1944]: time="2025-02-13T15:23:27.875574582Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:27.876220 containerd[1944]: time="2025-02-13T15:23:27.875948454Z" level=info msg="Ensure that sandbox e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31 in task-service has been cleanup successfully" Feb 13 15:23:27.876524 containerd[1944]: time="2025-02-13T15:23:27.876388938Z" level=info msg="TearDown network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" successfully" Feb 13 15:23:27.876770 containerd[1944]: time="2025-02-13T15:23:27.876475254Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" returns successfully" Feb 13 15:23:27.879219 containerd[1944]: time="2025-02-13T15:23:27.877807134Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:23:27.879219 containerd[1944]: time="2025-02-13T15:23:27.878253570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:7,}" Feb 13 15:23:27.879219 containerd[1944]: time="2025-02-13T15:23:27.878443626Z" level=info msg="TearDown network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" successfully" Feb 13 15:23:27.879219 containerd[1944]: time="2025-02-13T15:23:27.878590830Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" returns successfully" Feb 13 15:23:27.880206 containerd[1944]: time="2025-02-13T15:23:27.880149918Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:23:27.880355 containerd[1944]: time="2025-02-13T15:23:27.880316646Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:23:27.880410 containerd[1944]: time="2025-02-13T15:23:27.880347870Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 
15:23:27.881273 containerd[1944]: time="2025-02-13T15:23:27.881219958Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:23:27.881718 containerd[1944]: time="2025-02-13T15:23:27.881669922Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:23:27.881802 containerd[1944]: time="2025-02-13T15:23:27.881711730Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:23:27.882495 containerd[1944]: time="2025-02-13T15:23:27.882436158Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:27.883914 containerd[1944]: time="2025-02-13T15:23:27.883844094Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:27.884125 containerd[1944]: time="2025-02-13T15:23:27.884081922Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:27.884809 containerd[1944]: time="2025-02-13T15:23:27.884757606Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:27.885002 containerd[1944]: time="2025-02-13T15:23:27.884950038Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:27.885002 containerd[1944]: time="2025-02-13T15:23:27.884983146Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:27.886325 containerd[1944]: time="2025-02-13T15:23:27.886258614Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:27.887255 containerd[1944]: time="2025-02-13T15:23:27.887099634Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:27.887526 containerd[1944]: time="2025-02-13T15:23:27.887437722Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:27.889365 containerd[1944]: time="2025-02-13T15:23:27.888479274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:7,}" Feb 13 15:23:27.901535 containerd[1944]: time="2025-02-13T15:23:27.901467367Z" level=info msg="StartContainer for \"54e57aedd4d360b37c90de3f8fa12cd518796b96a4a85a9000278509a9c4d65d\" returns successfully" Feb 13 15:23:28.052148 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:23:28.052322 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 15:23:28.093223 containerd[1944]: time="2025-02-13T15:23:28.093150268Z" level=error msg="Failed to destroy network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.094543 containerd[1944]: time="2025-02-13T15:23:28.094460248Z" level=error msg="encountered an error cleaning up failed sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.094818 containerd[1944]: time="2025-02-13T15:23:28.094775488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.096077 kubelet[2426]: E0213 15:23:28.095300 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.096077 kubelet[2426]: E0213 15:23:28.095375 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:28.096077 kubelet[2426]: E0213 15:23:28.095420 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-8fn8f" Feb 13 15:23:28.096450 kubelet[2426]: E0213 15:23:28.095480 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-8fn8f_default(5bc7496e-9dfe-4d8f-91c8-e9eb119f3241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-8fn8f" podUID="5bc7496e-9dfe-4d8f-91c8-e9eb119f3241" Feb 13 15:23:28.122762 containerd[1944]: time="2025-02-13T15:23:28.122682340Z" level=error msg="Failed to destroy network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.123344 containerd[1944]: time="2025-02-13T15:23:28.123286672Z" level=error msg="encountered an error cleaning up failed sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.123452 containerd[1944]: time="2025-02-13T15:23:28.123392128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.124384 kubelet[2426]: E0213 15:23:28.123796 2426 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:23:28.124384 kubelet[2426]: E0213 15:23:28.123869 2426 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:28.124384 kubelet[2426]: E0213 15:23:28.123938 2426 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw5tb" Feb 13 15:23:28.124637 kubelet[2426]: E0213 15:23:28.124013 2426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw5tb_calico-system(ae81791c-2e14-4b2d-805b-1a1db95301cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw5tb" podUID="ae81791c-2e14-4b2d-805b-1a1db95301cc" Feb 13 15:23:28.424304 kubelet[2426]: E0213 15:23:28.424160 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:28.581961 systemd[1]: run-netns-cni\x2d330b2ad5\x2d2243\x2d709f\x2d93e6\x2d52300eed71fd.mount: Deactivated successfully. Feb 13 15:23:28.582562 systemd[1]: run-netns-cni\x2d10e9da89\x2d8b9d\x2d3e6a\x2d3b9a\x2dd3fa1b60397b.mount: Deactivated successfully. Feb 13 15:23:28.898209 kubelet[2426]: I0213 15:23:28.898044 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5" Feb 13 15:23:28.900445 containerd[1944]: time="2025-02-13T15:23:28.900310880Z" level=info msg="StopPodSandbox for \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\"" Feb 13 15:23:28.903690 containerd[1944]: time="2025-02-13T15:23:28.900739964Z" level=info msg="Ensure that sandbox d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5 in task-service has been cleanup successfully" Feb 13 15:23:28.903690 containerd[1944]: time="2025-02-13T15:23:28.902322956Z" level=info msg="TearDown network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" successfully" Feb 13 15:23:28.903690 containerd[1944]: time="2025-02-13T15:23:28.902367200Z" level=info msg="StopPodSandbox for \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" returns successfully" Feb 13 15:23:28.906756 containerd[1944]: time="2025-02-13T15:23:28.906341036Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" Feb 13 15:23:28.906756 containerd[1944]: time="2025-02-13T15:23:28.906520652Z" level=info msg="TearDown network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" successfully" Feb 13 15:23:28.906756 containerd[1944]: time="2025-02-13T15:23:28.906546740Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" returns successfully" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.907659620Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.907803668Z" level=info msg="TearDown network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" successfully" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.907825328Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" returns successfully" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.909381116Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.909536036Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:23:28.908651 containerd[1944]: time="2025-02-13T15:23:28.909557264Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:23:28.910521 systemd[1]: run-netns-cni\x2d758fd311\x2d4776\x2de3ff\x2dd18d\x2db72ba2452e04.mount: Deactivated successfully. 
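The run-netns-cni\x2d... units being deactivated are the per-attempt CNI network namespaces; systemd escapes literal hyphens in the mounted path as \x2d in the unit name. A small, illustrative decoder for just that escape (the helper name is made up; a full decoder would also map the remaining unescaped separators back to slashes, e.g. run-netns-... -> /run/netns/...):

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeHyphen undoes only the "\x2d" escape used in the unit names above.
func unescapeHyphen(unit string) string {
	return strings.ReplaceAll(unit, `\x2d`, "-")
}

func main() {
	unit := `run-netns-cni\x2d758fd311\x2d4776\x2de3ff\x2dd18d\x2db72ba2452e04.mount`
	fmt.Println(unescapeHyphen(unit))
	// prints: run-netns-cni-758fd311-4776-e3ff-d18d-b72ba2452e04.mount
	// i.e. the per-attempt CNI netns mounted under /run/netns/
}
```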
Feb 13 15:23:28.915995 containerd[1944]: time="2025-02-13T15:23:28.915252512Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:23:28.915995 containerd[1944]: time="2025-02-13T15:23:28.915533216Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:23:28.915995 containerd[1944]: time="2025-02-13T15:23:28.915559832Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:23:28.916229 kubelet[2426]: I0213 15:23:28.916126 2426 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735" Feb 13 15:23:28.917821 containerd[1944]: time="2025-02-13T15:23:28.917758568Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:23:28.918766 containerd[1944]: time="2025-02-13T15:23:28.918703076Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:23:28.918766 containerd[1944]: time="2025-02-13T15:23:28.918747188Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:23:28.919001 containerd[1944]: time="2025-02-13T15:23:28.918966080Z" level=info msg="StopPodSandbox for \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\"" Feb 13 15:23:28.919505 containerd[1944]: time="2025-02-13T15:23:28.919454576Z" level=info msg="Ensure that sandbox 435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735 in task-service has been cleanup successfully" Feb 13 15:23:28.920167 containerd[1944]: time="2025-02-13T15:23:28.920117720Z" level=info msg="TearDown network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" successfully" Feb 13 15:23:28.920167 containerd[1944]: time="2025-02-13T15:23:28.920159828Z" level=info msg="StopPodSandbox for \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" returns successfully" Feb 13 15:23:28.920688 containerd[1944]: time="2025-02-13T15:23:28.920637896Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:23:28.922860 containerd[1944]: time="2025-02-13T15:23:28.920948576Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:23:28.922860 containerd[1944]: time="2025-02-13T15:23:28.921007652Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:23:28.922860 containerd[1944]: time="2025-02-13T15:23:28.921647252Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:23:28.922860 containerd[1944]: time="2025-02-13T15:23:28.921847472Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:23:28.922860 containerd[1944]: time="2025-02-13T15:23:28.921923924Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:23:28.924903 containerd[1944]: time="2025-02-13T15:23:28.923239640Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:8,}" Feb 13 15:23:28.924903 containerd[1944]: time="2025-02-13T15:23:28.923292260Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" Feb 13 15:23:28.924903 containerd[1944]: time="2025-02-13T15:23:28.923451560Z" level=info msg="TearDown network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" successfully" Feb 13 15:23:28.924903 containerd[1944]: time="2025-02-13T15:23:28.923478080Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" returns successfully" Feb 13 15:23:28.927103 systemd[1]: run-netns-cni\x2d56e3c689\x2d03be\x2d670f\x2d5db5\x2dd8e787593996.mount: Deactivated successfully. Feb 13 15:23:28.928049 containerd[1944]: time="2025-02-13T15:23:28.927335288Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:23:28.928049 containerd[1944]: time="2025-02-13T15:23:28.927502340Z" level=info msg="TearDown network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" successfully" Feb 13 15:23:28.928049 containerd[1944]: time="2025-02-13T15:23:28.927526352Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" returns successfully" Feb 13 15:23:28.930810 containerd[1944]: time="2025-02-13T15:23:28.930289496Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:23:28.930810 containerd[1944]: time="2025-02-13T15:23:28.930452552Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:23:28.930810 containerd[1944]: time="2025-02-13T15:23:28.930475280Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 15:23:28.931416 containerd[1944]: time="2025-02-13T15:23:28.931376276Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:23:28.931761 containerd[1944]: time="2025-02-13T15:23:28.931730528Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:23:28.931997 containerd[1944]: time="2025-02-13T15:23:28.931966628Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:23:28.933419 containerd[1944]: time="2025-02-13T15:23:28.933357776Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:23:28.933562 containerd[1944]: time="2025-02-13T15:23:28.933531860Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:23:28.933633 containerd[1944]: time="2025-02-13T15:23:28.933554888Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:23:28.936101 containerd[1944]: time="2025-02-13T15:23:28.935473016Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:23:28.936101 containerd[1944]: time="2025-02-13T15:23:28.935784500Z" level=info msg="TearDown network for sandbox 
\"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:23:28.936101 containerd[1944]: time="2025-02-13T15:23:28.935807576Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:23:28.938759 containerd[1944]: time="2025-02-13T15:23:28.937044080Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:23:28.938759 containerd[1944]: time="2025-02-13T15:23:28.937196408Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:23:28.938759 containerd[1944]: time="2025-02-13T15:23:28.937219304Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:23:28.941460 containerd[1944]: time="2025-02-13T15:23:28.941381612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:8,}" Feb 13 15:23:29.229847 systemd-networkd[1852]: cali9c049b1905c: Link UP Feb 13 15:23:29.230503 (udev-worker)[3404]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:23:29.234636 systemd-networkd[1852]: cali9c049b1905c: Gained carrier Feb 13 15:23:29.241138 kubelet[2426]: I0213 15:23:29.240505 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m2bff" podStartSLOduration=5.376901797 podStartE2EDuration="22.239898149s" podCreationTimestamp="2025-02-13 15:23:07 +0000 UTC" firstStartedPulling="2025-02-13 15:23:10.843190526 +0000 UTC m=+6.027101275" lastFinishedPulling="2025-02-13 15:23:27.706186866 +0000 UTC m=+22.890097627" observedRunningTime="2025-02-13 15:23:28.931108616 +0000 UTC m=+24.115019365" watchObservedRunningTime="2025-02-13 15:23:29.239898149 +0000 UTC m=+24.423808946" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.041 [INFO][3467] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.072 [INFO][3467] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0 nginx-deployment-8587fbcb89- default 5bc7496e-9dfe-4d8f-91c8-e9eb119f3241 1077 0 2025-02-13 15:23:20 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.64 nginx-deployment-8587fbcb89-8fn8f eth0 default [] [] [kns.default ksa.default.default] cali9c049b1905c [] []}} ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.072 [INFO][3467] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.142 [INFO][3500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" HandleID="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Workload="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.163 [INFO][3500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" HandleID="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Workload="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c9db0), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.64", "pod":"nginx-deployment-8587fbcb89-8fn8f", "timestamp":"2025-02-13 15:23:29.142694357 +0000 UTC"}, Hostname:"172.31.20.64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.163 [INFO][3500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.163 [INFO][3500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.163 [INFO][3500] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.64' Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.171 [INFO][3500] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.178 [INFO][3500] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.185 [INFO][3500] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.188 [INFO][3500] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.193 [INFO][3500] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.193 [INFO][3500] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.196 [INFO][3500] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705 Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.204 [INFO][3500] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.214 [INFO][3500] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.129/26] block=192.168.20.128/26 handle="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.215 [INFO][3500] ipam/ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.20.129/26] handle="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" host="172.31.20.64" Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.215 [INFO][3500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:23:29.244193 containerd[1944]: 2025-02-13 15:23:29.215 [INFO][3500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.129/26] IPv6=[] ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" HandleID="k8s-pod-network.47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Workload="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.220 [INFO][3467] cni-plugin/k8s.go 386: Populated endpoint ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5bc7496e-9dfe-4d8f-91c8-e9eb119f3241", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-8fn8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c049b1905c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.220 [INFO][3467] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.129/32] ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.220 [INFO][3467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c049b1905c ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.230 [INFO][3467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.230 [INFO][3467] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"5bc7496e-9dfe-4d8f-91c8-e9eb119f3241", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705", Pod:"nginx-deployment-8587fbcb89-8fn8f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c049b1905c", MAC:"3a:99:7b:db:a0:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:29.245988 containerd[1944]: 2025-02-13 15:23:29.240 [INFO][3467] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705" Namespace="default" Pod="nginx-deployment-8587fbcb89-8fn8f" WorkloadEndpoint="172.31.20.64-k8s-nginx--deployment--8587fbcb89--8fn8f-eth0" Feb 13 15:23:29.289721 containerd[1944]: time="2025-02-13T15:23:29.288746957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:29.289721 containerd[1944]: time="2025-02-13T15:23:29.288991133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:29.289721 containerd[1944]: time="2025-02-13T15:23:29.289040873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:29.291405 containerd[1944]: time="2025-02-13T15:23:29.290691761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:29.321214 systemd[1]: Started cri-containerd-47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705.scope - libcontainer container 47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705. Feb 13 15:23:29.331806 systemd-networkd[1852]: calia73415d8d4e: Link UP Feb 13 15:23:29.333005 (udev-worker)[3402]: Network interface NamePolicy= disabled on kernel command line. 
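This is the first successful CNI ADD: Calico IPAM claims 192.168.20.129 from the node's affine block 192.168.20.128/26 and wires the nginx pod to the host-side veth cali9c049b1905c. A short sanity check of the block math, with the values copied from the log (illustrative only):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.20.128/26") // node's affine IPAM block
	podIP := netip.MustParseAddr("192.168.20.129")      // address handed to the nginx pod

	fmt.Println("addresses in block:", 1<<(32-block.Bits()))   // 64
	fmt.Println("pod IP inside block:", block.Contains(podIP)) // true
}
```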
Feb 13 15:23:29.335848 systemd-networkd[1852]: calia73415d8d4e: Gained carrier Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.082 [INFO][3486] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.115 [INFO][3486] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.64-k8s-csi--node--driver--gw5tb-eth0 csi-node-driver- calico-system ae81791c-2e14-4b2d-805b-1a1db95301cc 987 0 2025-02-13 15:23:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.20.64 csi-node-driver-gw5tb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia73415d8d4e [] []}} ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.115 [INFO][3486] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.179 [INFO][3507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" HandleID="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Workload="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.200 [INFO][3507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" HandleID="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Workload="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316b20), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.20.64", "pod":"csi-node-driver-gw5tb", "timestamp":"2025-02-13 15:23:29.179037101 +0000 UTC"}, Hostname:"172.31.20.64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.200 [INFO][3507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.215 [INFO][3507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.215 [INFO][3507] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.64' Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.273 [INFO][3507] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.280 [INFO][3507] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.288 [INFO][3507] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.295 [INFO][3507] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.298 [INFO][3507] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.298 [INFO][3507] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.301 [INFO][3507] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1 Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.310 [INFO][3507] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.319 [INFO][3507] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.130/26] block=192.168.20.128/26 handle="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.320 [INFO][3507] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.130/26] handle="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" host="172.31.20.64" Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.320 [INFO][3507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
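The IPAM entries above show Calico confirming block affinity for 192.168.20.128/26 on host 172.31.20.64 and handing out consecutive addresses: .129 for the nginx pod, .130 for csi-node-driver, and later .131 for the NFS provisioner. A rough Go sketch of sequential assignment from that /26, using only net/netip, reproduces the same ordering; real Calico IPAM additionally tracks handles, reservations, and the host-wide lock seen in the log, which this ignores.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree hands out addresses from the block in order, skipping the network
// address itself and anything already allocated. Illustration of the ordering
// seen in the log only, not Calico's actual allocator.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.20.128/26")
	allocated := map[netip.Addr]bool{}

	pods := []string{"nginx-deployment-8587fbcb89-8fn8f", "csi-node-driver-gw5tb", "nfs-server-provisioner-0"}
	for _, pod := range pods {
		addr, ok := nextFree(block, allocated)
		if !ok {
			panic("block exhausted")
		}
		allocated[addr] = true
		fmt.Printf("%-40s %s/26\n", pod, addr) // .129, .130, .131 as in the log
	}
}
```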
Feb 13 15:23:29.357797 containerd[1944]: 2025-02-13 15:23:29.320 [INFO][3507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.130/26] IPv6=[] ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" HandleID="k8s-pod-network.10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Workload="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.324 [INFO][3486] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-csi--node--driver--gw5tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae81791c-2e14-4b2d-805b-1a1db95301cc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"", Pod:"csi-node-driver-gw5tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia73415d8d4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.325 [INFO][3486] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.130/32] ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.325 [INFO][3486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia73415d8d4e ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.333 [INFO][3486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.334 [INFO][3486] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" 
WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-csi--node--driver--gw5tb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae81791c-2e14-4b2d-805b-1a1db95301cc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1", Pod:"csi-node-driver-gw5tb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia73415d8d4e", MAC:"62:92:53:9e:8e:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:29.358898 containerd[1944]: 2025-02-13 15:23:29.353 [INFO][3486] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1" Namespace="calico-system" Pod="csi-node-driver-gw5tb" WorkloadEndpoint="172.31.20.64-k8s-csi--node--driver--gw5tb-eth0" Feb 13 15:23:29.410198 containerd[1944]: time="2025-02-13T15:23:29.410071698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-8fn8f,Uid:5bc7496e-9dfe-4d8f-91c8-e9eb119f3241,Namespace:default,Attempt:8,} returns sandbox id \"47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705\"" Feb 13 15:23:29.413951 containerd[1944]: time="2025-02-13T15:23:29.413538078Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:23:29.414495 containerd[1944]: time="2025-02-13T15:23:29.414376602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:29.414947 containerd[1944]: time="2025-02-13T15:23:29.414531810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:29.414947 containerd[1944]: time="2025-02-13T15:23:29.414563106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:29.414947 containerd[1944]: time="2025-02-13T15:23:29.414797262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:29.425395 kubelet[2426]: E0213 15:23:29.425325 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:29.447209 systemd[1]: Started cri-containerd-10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1.scope - libcontainer container 10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1. Feb 13 15:23:29.549447 containerd[1944]: time="2025-02-13T15:23:29.549136831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw5tb,Uid:ae81791c-2e14-4b2d-805b-1a1db95301cc,Namespace:calico-system,Attempt:8,} returns sandbox id \"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1\"" Feb 13 15:23:29.972018 kernel: bpftool[3725]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:23:30.265104 systemd-networkd[1852]: cali9c049b1905c: Gained IPv6LL Feb 13 15:23:30.304716 systemd-networkd[1852]: vxlan.calico: Link UP Feb 13 15:23:30.304735 systemd-networkd[1852]: vxlan.calico: Gained carrier Feb 13 15:23:30.394388 systemd-networkd[1852]: calia73415d8d4e: Gained IPv6LL Feb 13 15:23:30.425938 kubelet[2426]: E0213 15:23:30.425860 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:31.426776 kubelet[2426]: E0213 15:23:31.426729 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:31.801146 systemd-networkd[1852]: vxlan.calico: Gained IPv6LL Feb 13 15:23:32.428377 kubelet[2426]: E0213 15:23:32.428292 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:32.917330 update_engine[1926]: I20250213 15:23:32.917245 1926 update_attempter.cc:509] Updating boot flags... Feb 13 15:23:33.032003 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3400) Feb 13 15:23:33.229315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313632167.mount: Deactivated successfully. 
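The kubelet line that repeats roughly once a second throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") comes from its file-based static-pod config source: the configured manifest directory does not exist on this node, so the check fails and is deliberately ignored. A tiny Go sketch of the same stat-and-ignore pattern; the function name and exact wording are illustrative, not kubelet's actual code.

```go
package main

import (
	"log"
	"os"
)

// checkStaticPodPath mimics the behaviour visible in the log: if the static pod
// manifest directory is missing, report it and carry on instead of failing.
func checkStaticPodPath(path string) {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
		return
	}
	log.Printf("watching static pod manifests in %q", path)
}

func main() {
	checkStaticPodPath("/etc/kubernetes/manifests")
}
```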
Feb 13 15:23:33.417405 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3400) Feb 13 15:23:33.428544 kubelet[2426]: E0213 15:23:33.428462 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:34.268108 ntpd[1920]: Listen normally on 8 vxlan.calico 192.168.20.128:123 Feb 13 15:23:34.269047 ntpd[1920]: 13 Feb 15:23:34 ntpd[1920]: Listen normally on 8 vxlan.calico 192.168.20.128:123 Feb 13 15:23:34.269047 ntpd[1920]: 13 Feb 15:23:34 ntpd[1920]: Listen normally on 9 cali9c049b1905c [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 15:23:34.269047 ntpd[1920]: 13 Feb 15:23:34 ntpd[1920]: Listen normally on 10 calia73415d8d4e [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 15:23:34.269047 ntpd[1920]: 13 Feb 15:23:34 ntpd[1920]: Listen normally on 11 vxlan.calico [fe80::6463:1ff:fef1:4d2c%5]:123 Feb 13 15:23:34.268744 ntpd[1920]: Listen normally on 9 cali9c049b1905c [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 15:23:34.268826 ntpd[1920]: Listen normally on 10 calia73415d8d4e [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 15:23:34.268932 ntpd[1920]: Listen normally on 11 vxlan.calico [fe80::6463:1ff:fef1:4d2c%5]:123 Feb 13 15:23:34.429520 kubelet[2426]: E0213 15:23:34.429457 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:35.006665 containerd[1944]: time="2025-02-13T15:23:35.006605542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:35.009348 containerd[1944]: time="2025-02-13T15:23:35.009265198Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 15:23:35.010999 containerd[1944]: time="2025-02-13T15:23:35.010929814Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:35.016920 containerd[1944]: time="2025-02-13T15:23:35.016667614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:35.019755 containerd[1944]: time="2025-02-13T15:23:35.019547062Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.60594476s" Feb 13 15:23:35.019755 containerd[1944]: time="2025-02-13T15:23:35.019605754Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:23:35.025657 containerd[1944]: time="2025-02-13T15:23:35.025264630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:23:35.030603 containerd[1944]: time="2025-02-13T15:23:35.030536194Z" level=info msg="CreateContainer within sandbox \"47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:23:35.063509 containerd[1944]: time="2025-02-13T15:23:35.063434254Z" level=info msg="CreateContainer within sandbox 
\"47ea33023a0732bf508ada7f38de716e06d8c07f151242b4dd94ebbdc10b7705\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e932f7fa73603e36b3c647547dbba548efd6f7c556ffbe4129a2d21cab11f961\"" Feb 13 15:23:35.064411 containerd[1944]: time="2025-02-13T15:23:35.064324270Z" level=info msg="StartContainer for \"e932f7fa73603e36b3c647547dbba548efd6f7c556ffbe4129a2d21cab11f961\"" Feb 13 15:23:35.122283 systemd[1]: Started cri-containerd-e932f7fa73603e36b3c647547dbba548efd6f7c556ffbe4129a2d21cab11f961.scope - libcontainer container e932f7fa73603e36b3c647547dbba548efd6f7c556ffbe4129a2d21cab11f961. Feb 13 15:23:35.174248 containerd[1944]: time="2025-02-13T15:23:35.174159335Z" level=info msg="StartContainer for \"e932f7fa73603e36b3c647547dbba548efd6f7c556ffbe4129a2d21cab11f961\" returns successfully" Feb 13 15:23:35.430111 kubelet[2426]: E0213 15:23:35.430054 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:36.430800 kubelet[2426]: E0213 15:23:36.430441 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:36.575364 containerd[1944]: time="2025-02-13T15:23:36.575287838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:36.583934 containerd[1944]: time="2025-02-13T15:23:36.582665678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:23:36.583934 containerd[1944]: time="2025-02-13T15:23:36.582866546Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:36.591052 containerd[1944]: time="2025-02-13T15:23:36.590982338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:36.592451 containerd[1944]: time="2025-02-13T15:23:36.592389470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.567054268s" Feb 13 15:23:36.592451 containerd[1944]: time="2025-02-13T15:23:36.592445522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:23:36.598296 containerd[1944]: time="2025-02-13T15:23:36.598229390Z" level=info msg="CreateContainer within sandbox \"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:23:36.632271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539544395.mount: Deactivated successfully. 
Feb 13 15:23:36.639672 containerd[1944]: time="2025-02-13T15:23:36.639595958Z" level=info msg="CreateContainer within sandbox \"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d52316ae61000e6096cf93bbbc9efd4f057e0688ded0d8a9965d0caf6958179\"" Feb 13 15:23:36.641049 containerd[1944]: time="2025-02-13T15:23:36.640978322Z" level=info msg="StartContainer for \"2d52316ae61000e6096cf93bbbc9efd4f057e0688ded0d8a9965d0caf6958179\"" Feb 13 15:23:36.705216 systemd[1]: Started cri-containerd-2d52316ae61000e6096cf93bbbc9efd4f057e0688ded0d8a9965d0caf6958179.scope - libcontainer container 2d52316ae61000e6096cf93bbbc9efd4f057e0688ded0d8a9965d0caf6958179. Feb 13 15:23:36.759975 containerd[1944]: time="2025-02-13T15:23:36.759916683Z" level=info msg="StartContainer for \"2d52316ae61000e6096cf93bbbc9efd4f057e0688ded0d8a9965d0caf6958179\" returns successfully" Feb 13 15:23:36.762317 containerd[1944]: time="2025-02-13T15:23:36.762255627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:23:37.431125 kubelet[2426]: E0213 15:23:37.431061 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:38.358593 containerd[1944]: time="2025-02-13T15:23:38.358537058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:38.361751 containerd[1944]: time="2025-02-13T15:23:38.361561623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:23:38.364546 containerd[1944]: time="2025-02-13T15:23:38.364488375Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:38.370966 containerd[1944]: time="2025-02-13T15:23:38.370901535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:38.372610 containerd[1944]: time="2025-02-13T15:23:38.372552819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.610166752s" Feb 13 15:23:38.372704 containerd[1944]: time="2025-02-13T15:23:38.372606771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:23:38.376863 containerd[1944]: time="2025-02-13T15:23:38.376796271Z" level=info msg="CreateContainer within sandbox \"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:23:38.407751 containerd[1944]: time="2025-02-13T15:23:38.407685807Z" level=info msg="CreateContainer within sandbox \"10a98623908676d2d7e77a127d9844b3caa43ea43b651f350be8ed15333965e1\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d2bc36131365877734974dc163ea3c516543e80f4add4dcdf7fad591089185de\"" Feb 13 15:23:38.408438 containerd[1944]: time="2025-02-13T15:23:38.408391191Z" level=info msg="StartContainer for \"d2bc36131365877734974dc163ea3c516543e80f4add4dcdf7fad591089185de\"" Feb 13 15:23:38.432994 kubelet[2426]: E0213 15:23:38.432185 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:38.463293 systemd[1]: Started cri-containerd-d2bc36131365877734974dc163ea3c516543e80f4add4dcdf7fad591089185de.scope - libcontainer container d2bc36131365877734974dc163ea3c516543e80f4add4dcdf7fad591089185de. Feb 13 15:23:38.525401 containerd[1944]: time="2025-02-13T15:23:38.524545371Z" level=info msg="StartContainer for \"d2bc36131365877734974dc163ea3c516543e80f4add4dcdf7fad591089185de\" returns successfully" Feb 13 15:23:38.666757 kubelet[2426]: I0213 15:23:38.666630 2426 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:23:38.667130 kubelet[2426]: I0213 15:23:38.666944 2426 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:23:39.034264 kubelet[2426]: I0213 15:23:39.034051 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gw5tb" podStartSLOduration=23.21766683 podStartE2EDuration="32.034023242s" podCreationTimestamp="2025-02-13 15:23:07 +0000 UTC" firstStartedPulling="2025-02-13 15:23:29.558212299 +0000 UTC m=+24.742123060" lastFinishedPulling="2025-02-13 15:23:38.374568699 +0000 UTC m=+33.558479472" observedRunningTime="2025-02-13 15:23:39.032085302 +0000 UTC m=+34.215996087" watchObservedRunningTime="2025-02-13 15:23:39.034023242 +0000 UTC m=+34.217934027" Feb 13 15:23:39.034649 kubelet[2426]: I0213 15:23:39.034292 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-8fn8f" podStartSLOduration=13.423424282 podStartE2EDuration="19.034281086s" podCreationTimestamp="2025-02-13 15:23:20 +0000 UTC" firstStartedPulling="2025-02-13 15:23:29.412976322 +0000 UTC m=+24.596887083" lastFinishedPulling="2025-02-13 15:23:35.023833138 +0000 UTC m=+30.207743887" observedRunningTime="2025-02-13 15:23:36.009994919 +0000 UTC m=+31.193905692" watchObservedRunningTime="2025-02-13 15:23:39.034281086 +0000 UTC m=+34.218191847" Feb 13 15:23:39.432951 kubelet[2426]: E0213 15:23:39.432871 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:40.433107 kubelet[2426]: E0213 15:23:40.433048 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:41.433627 kubelet[2426]: E0213 15:23:41.433554 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:41.914122 systemd[1]: Created slice kubepods-besteffort-pod6ba0495c_d695_4fef_95fb_833f66fca8bc.slice - libcontainer container kubepods-besteffort-pod6ba0495c_d695_4fef_95fb_833f66fca8bc.slice. 
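kubelet's pod_startup_latency_tracker entries above report, for csi-node-driver-gw5tb, podStartE2EDuration="32.034023242s": the gap between podCreationTimestamp (15:23:07 UTC) and observedRunningTime (15:23:39.034023242 UTC). The numbers in this log also line up with podStartSLOduration being that end-to-end time minus the window spent pulling images (firstStartedPulling to lastFinishedPulling). A small Go sketch reproduces the arithmetic from the values printed above.

```go
package main

import (
	"fmt"
	"time"
)

// goTimeLayout matches the "2025-02-13 15:23:39.034023242 +0000 UTC" form that
// kubelet prints in the startup-latency entries above.
const goTimeLayout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustTime(s string) time.Time {
	t, err := time.Parse(goTimeLayout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the csi-node-driver-gw5tb entry above.
	created := mustTime("2025-02-13 15:23:07 +0000 UTC")             // podCreationTimestamp
	running := mustTime("2025-02-13 15:23:39.034023242 +0000 UTC")   // observedRunningTime
	pullStart := mustTime("2025-02-13 15:23:29.558212299 +0000 UTC") // firstStartedPulling
	pullEnd := mustTime("2025-02-13 15:23:38.374568699 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)
	pulling := pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e)       // 32.034023242s, as reported
	fmt.Println("time spent pulling: ", pulling)
	fmt.Println("E2E minus pulling:  ", e2e-pulling) // lines up with podStartSLOduration=23.21766683
}
```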
Feb 13 15:23:42.073411 kubelet[2426]: I0213 15:23:42.073348 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw72h\" (UniqueName: \"kubernetes.io/projected/6ba0495c-d695-4fef-95fb-833f66fca8bc-kube-api-access-kw72h\") pod \"nfs-server-provisioner-0\" (UID: \"6ba0495c-d695-4fef-95fb-833f66fca8bc\") " pod="default/nfs-server-provisioner-0" Feb 13 15:23:42.073587 kubelet[2426]: I0213 15:23:42.073422 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6ba0495c-d695-4fef-95fb-833f66fca8bc-data\") pod \"nfs-server-provisioner-0\" (UID: \"6ba0495c-d695-4fef-95fb-833f66fca8bc\") " pod="default/nfs-server-provisioner-0" Feb 13 15:23:42.220251 containerd[1944]: time="2025-02-13T15:23:42.220063194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ba0495c-d695-4fef-95fb-833f66fca8bc,Namespace:default,Attempt:0,}" Feb 13 15:23:42.433836 kubelet[2426]: E0213 15:23:42.433771 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:42.459718 (udev-worker)[4198]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:23:42.460059 systemd-networkd[1852]: cali60e51b789ff: Link UP Feb 13 15:23:42.460525 systemd-networkd[1852]: cali60e51b789ff: Gained carrier Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.312 [INFO][4203] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.64-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6ba0495c-d695-4fef-95fb-833f66fca8bc 1223 0 2025-02-13 15:23:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.20.64 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.313 [INFO][4203] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.368 [INFO][4214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" HandleID="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Workload="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 
15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.400 [INFO][4214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" HandleID="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Workload="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.64", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:23:42.368444766 +0000 UTC"}, Hostname:"172.31.20.64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.401 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.401 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.401 [INFO][4214] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.64' Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.404 [INFO][4214] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.412 [INFO][4214] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.419 [INFO][4214] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.422 [INFO][4214] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.426 [INFO][4214] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.426 [INFO][4214] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.428 [INFO][4214] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73 Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.435 [INFO][4214] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.449 [INFO][4214] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.131/26] block=192.168.20.128/26 handle="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.449 [INFO][4214] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.131/26] handle="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" host="172.31.20.64" Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.449 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
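The nfs-server-provisioner endpoint dump in this block lists its named ports as hex values (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), which are just the familiar NFS-stack ports from the pod spec rendered in Go's %#v-style formatting. A throwaway Go sketch prints the mapping back to decimal.

```go
package main

import "fmt"

func main() {
	// Named ports exactly as they appear (in hex) in the WorkloadEndpoint dump above;
	// each name also has a -udp twin on the same port number.
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-10s %d\n", p.name, p.port)
	}
}
```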
Feb 13 15:23:42.491718 containerd[1944]: 2025-02-13 15:23:42.449 [INFO][4214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.131/26] IPv6=[] ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" HandleID="k8s-pod-network.f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Workload="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.493645 containerd[1944]: 2025-02-13 15:23:42.452 [INFO][4203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6ba0495c-d695-4fef-95fb-833f66fca8bc", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:42.493645 containerd[1944]: 2025-02-13 15:23:42.453 [INFO][4203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.131/32] ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.493645 containerd[1944]: 2025-02-13 15:23:42.453 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.493645 containerd[1944]: 2025-02-13 15:23:42.458 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.494867 containerd[1944]: 2025-02-13 15:23:42.459 [INFO][4203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6ba0495c-d695-4fef-95fb-833f66fca8bc", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"4a:71:05:82:21:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:23:42.494867 containerd[1944]: 2025-02-13 15:23:42.486 [INFO][4203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.64-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:23:42.532010 containerd[1944]: time="2025-02-13T15:23:42.531546955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:42.532010 containerd[1944]: time="2025-02-13T15:23:42.531691231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:42.532010 containerd[1944]: time="2025-02-13T15:23:42.531718435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:42.532578 containerd[1944]: time="2025-02-13T15:23:42.531978283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:42.580213 systemd[1]: Started cri-containerd-f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73.scope - libcontainer container f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73. 
Feb 13 15:23:42.639028 containerd[1944]: time="2025-02-13T15:23:42.638969324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ba0495c-d695-4fef-95fb-833f66fca8bc,Namespace:default,Attempt:0,} returns sandbox id \"f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73\"" Feb 13 15:23:42.642439 containerd[1944]: time="2025-02-13T15:23:42.642363176Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:23:43.434244 kubelet[2426]: E0213 15:23:43.434002 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:43.705236 systemd-networkd[1852]: cali60e51b789ff: Gained IPv6LL Feb 13 15:23:44.434422 kubelet[2426]: E0213 15:23:44.434195 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:45.301486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718525274.mount: Deactivated successfully. Feb 13 15:23:45.435129 kubelet[2426]: E0213 15:23:45.434930 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:46.268280 ntpd[1920]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 15:23:46.270813 ntpd[1920]: 13 Feb 15:23:46 ntpd[1920]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 15:23:46.435826 kubelet[2426]: E0213 15:23:46.435778 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:47.402911 kubelet[2426]: E0213 15:23:47.402804 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:47.438917 kubelet[2426]: E0213 15:23:47.438848 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:48.342841 containerd[1944]: time="2025-02-13T15:23:48.342730584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:48.344921 containerd[1944]: time="2025-02-13T15:23:48.344816664Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 15:23:48.347335 containerd[1944]: time="2025-02-13T15:23:48.347233260Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:48.353042 containerd[1944]: time="2025-02-13T15:23:48.352942056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:48.355119 containerd[1944]: time="2025-02-13T15:23:48.355053816Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.712607672s" Feb 13 15:23:48.355913 containerd[1944]: 
time="2025-02-13T15:23:48.355114980Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 15:23:48.360439 containerd[1944]: time="2025-02-13T15:23:48.360229620Z" level=info msg="CreateContainer within sandbox \"f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:23:48.390374 containerd[1944]: time="2025-02-13T15:23:48.390316728Z" level=info msg="CreateContainer within sandbox \"f28aa29855c3ba91df876936f11794768a134d98052a08b36510ce53da2ffc73\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d571a3fa34829a0e1ae2e995f144f351d2e4a64d811320ddb66d5a8109b62dde\"" Feb 13 15:23:48.391406 containerd[1944]: time="2025-02-13T15:23:48.391294992Z" level=info msg="StartContainer for \"d571a3fa34829a0e1ae2e995f144f351d2e4a64d811320ddb66d5a8109b62dde\"" Feb 13 15:23:48.440073 kubelet[2426]: E0213 15:23:48.439999 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:48.452249 systemd[1]: Started cri-containerd-d571a3fa34829a0e1ae2e995f144f351d2e4a64d811320ddb66d5a8109b62dde.scope - libcontainer container d571a3fa34829a0e1ae2e995f144f351d2e4a64d811320ddb66d5a8109b62dde. Feb 13 15:23:48.498734 containerd[1944]: time="2025-02-13T15:23:48.498643201Z" level=info msg="StartContainer for \"d571a3fa34829a0e1ae2e995f144f351d2e4a64d811320ddb66d5a8109b62dde\" returns successfully" Feb 13 15:23:49.058035 kubelet[2426]: I0213 15:23:49.057951 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.3417713239999998 podStartE2EDuration="8.057930876s" podCreationTimestamp="2025-02-13 15:23:41 +0000 UTC" firstStartedPulling="2025-02-13 15:23:42.641200328 +0000 UTC m=+37.825111077" lastFinishedPulling="2025-02-13 15:23:48.357359868 +0000 UTC m=+43.541270629" observedRunningTime="2025-02-13 15:23:49.057622848 +0000 UTC m=+44.241533633" watchObservedRunningTime="2025-02-13 15:23:49.057930876 +0000 UTC m=+44.241841661" Feb 13 15:23:49.441035 kubelet[2426]: E0213 15:23:49.440948 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:50.441799 kubelet[2426]: E0213 15:23:50.441726 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:51.441950 kubelet[2426]: E0213 15:23:51.441898 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:52.442298 kubelet[2426]: E0213 15:23:52.442225 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:53.442770 kubelet[2426]: E0213 15:23:53.442706 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:54.443710 kubelet[2426]: E0213 15:23:54.443654 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:55.444895 kubelet[2426]: E0213 15:23:55.444812 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:56.445269 kubelet[2426]: E0213 
15:23:56.445202 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:57.446301 kubelet[2426]: E0213 15:23:57.446240 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:58.446866 kubelet[2426]: E0213 15:23:58.446812 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:23:59.448247 kubelet[2426]: E0213 15:23:59.448176 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:00.448946 kubelet[2426]: E0213 15:24:00.448861 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:01.449461 kubelet[2426]: E0213 15:24:01.449366 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:02.449805 kubelet[2426]: E0213 15:24:02.449752 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:03.450974 kubelet[2426]: E0213 15:24:03.450900 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:04.452088 kubelet[2426]: E0213 15:24:04.452028 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:05.452849 kubelet[2426]: E0213 15:24:05.452787 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:06.453982 kubelet[2426]: E0213 15:24:06.453923 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:07.402502 kubelet[2426]: E0213 15:24:07.402439 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:07.438484 containerd[1944]: time="2025-02-13T15:24:07.438408331Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:24:07.439239 containerd[1944]: time="2025-02-13T15:24:07.438661483Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:24:07.439239 containerd[1944]: time="2025-02-13T15:24:07.438710731Z" level=info msg="StopPodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:24:07.440187 containerd[1944]: time="2025-02-13T15:24:07.440138695Z" level=info msg="RemovePodSandbox for \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:24:07.440310 containerd[1944]: time="2025-02-13T15:24:07.440195731Z" level=info msg="Forcibly stopping sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\"" Feb 13 15:24:07.440444 containerd[1944]: time="2025-02-13T15:24:07.440410699Z" level=info msg="TearDown network for sandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" successfully" Feb 13 15:24:07.447679 containerd[1944]: time="2025-02-13T15:24:07.447620299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\": an error occurred 
when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.447679 containerd[1944]: time="2025-02-13T15:24:07.447704851Z" level=info msg="RemovePodSandbox \"03f00dd6d90716c5f9fc978851c35c91beb136101fab7b75b16a0ac8fe6c9422\" returns successfully" Feb 13 15:24:07.448459 containerd[1944]: time="2025-02-13T15:24:07.448342219Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:24:07.448920 containerd[1944]: time="2025-02-13T15:24:07.448504663Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:24:07.448920 containerd[1944]: time="2025-02-13T15:24:07.448528783Z" level=info msg="StopPodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:24:07.450596 containerd[1944]: time="2025-02-13T15:24:07.449259583Z" level=info msg="RemovePodSandbox for \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:24:07.450596 containerd[1944]: time="2025-02-13T15:24:07.449302375Z" level=info msg="Forcibly stopping sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\"" Feb 13 15:24:07.450596 containerd[1944]: time="2025-02-13T15:24:07.449422867Z" level=info msg="TearDown network for sandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" successfully" Feb 13 15:24:07.454915 kubelet[2426]: E0213 15:24:07.454772 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:07.456487 containerd[1944]: time="2025-02-13T15:24:07.454908127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.456487 containerd[1944]: time="2025-02-13T15:24:07.454976899Z" level=info msg="RemovePodSandbox \"c23272851f372c64db61abe72134ec380ce7ec67c1dd5678a300440c52404735\" returns successfully" Feb 13 15:24:07.456487 containerd[1944]: time="2025-02-13T15:24:07.455508847Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:24:07.456487 containerd[1944]: time="2025-02-13T15:24:07.455654035Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:24:07.456487 containerd[1944]: time="2025-02-13T15:24:07.455682391Z" level=info msg="StopPodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:24:07.457277 containerd[1944]: time="2025-02-13T15:24:07.457127455Z" level=info msg="RemovePodSandbox for \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:24:07.457277 containerd[1944]: time="2025-02-13T15:24:07.457182775Z" level=info msg="Forcibly stopping sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\"" Feb 13 15:24:07.457554 containerd[1944]: time="2025-02-13T15:24:07.457315135Z" level=info msg="TearDown network for sandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" successfully" Feb 13 15:24:07.462630 containerd[1944]: time="2025-02-13T15:24:07.462574207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.462775 containerd[1944]: time="2025-02-13T15:24:07.462658135Z" level=info msg="RemovePodSandbox \"d3f938a22b65c6ce0b8a66e46472d486425a77af54cdaea76f80f02e44abb65e\" returns successfully" Feb 13 15:24:07.463545 containerd[1944]: time="2025-02-13T15:24:07.463414639Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:24:07.464008 containerd[1944]: time="2025-02-13T15:24:07.463775743Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:24:07.464008 containerd[1944]: time="2025-02-13T15:24:07.463809583Z" level=info msg="StopPodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:24:07.464780 containerd[1944]: time="2025-02-13T15:24:07.464737207Z" level=info msg="RemovePodSandbox for \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:24:07.464863 containerd[1944]: time="2025-02-13T15:24:07.464789503Z" level=info msg="Forcibly stopping sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\"" Feb 13 15:24:07.465032 containerd[1944]: time="2025-02-13T15:24:07.464998975Z" level=info msg="TearDown network for sandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" successfully" Feb 13 15:24:07.470443 containerd[1944]: time="2025-02-13T15:24:07.470376691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.470582 containerd[1944]: time="2025-02-13T15:24:07.470461183Z" level=info msg="RemovePodSandbox \"c5d5757ac7eafc3dddebab8ac3c8bcd09bbf365c6d1709eeccaaf945e5e3f8f8\" returns successfully" Feb 13 15:24:07.471457 containerd[1944]: time="2025-02-13T15:24:07.471180775Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:24:07.471457 containerd[1944]: time="2025-02-13T15:24:07.471334363Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:24:07.471457 containerd[1944]: time="2025-02-13T15:24:07.471356119Z" level=info msg="StopPodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 15:24:07.471914 containerd[1944]: time="2025-02-13T15:24:07.471855151Z" level=info msg="RemovePodSandbox for \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:24:07.472534 containerd[1944]: time="2025-02-13T15:24:07.472030111Z" level=info msg="Forcibly stopping sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\"" Feb 13 15:24:07.472534 containerd[1944]: time="2025-02-13T15:24:07.472157671Z" level=info msg="TearDown network for sandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" successfully" Feb 13 15:24:07.477595 containerd[1944]: time="2025-02-13T15:24:07.477528883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.478180 containerd[1944]: time="2025-02-13T15:24:07.477606463Z" level=info msg="RemovePodSandbox \"05a2e4face135efd1c8935d300e6ca2bb75d57767830b6f0b290e90c892060d9\" returns successfully" Feb 13 15:24:07.478835 containerd[1944]: time="2025-02-13T15:24:07.478544179Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:24:07.478835 containerd[1944]: time="2025-02-13T15:24:07.478704319Z" level=info msg="TearDown network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" successfully" Feb 13 15:24:07.478835 containerd[1944]: time="2025-02-13T15:24:07.478725991Z" level=info msg="StopPodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" returns successfully" Feb 13 15:24:07.479545 containerd[1944]: time="2025-02-13T15:24:07.479509267Z" level=info msg="RemovePodSandbox for \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:24:07.479944 containerd[1944]: time="2025-02-13T15:24:07.479657311Z" level=info msg="Forcibly stopping sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\"" Feb 13 15:24:07.479944 containerd[1944]: time="2025-02-13T15:24:07.479787691Z" level=info msg="TearDown network for sandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" successfully" Feb 13 15:24:07.485213 containerd[1944]: time="2025-02-13T15:24:07.485165011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.485558 containerd[1944]: time="2025-02-13T15:24:07.485416231Z" level=info msg="RemovePodSandbox \"601d529e4fdc5bccbb379825c5c3a989ba5da8f12fc31a661797034fe739301a\" returns successfully" Feb 13 15:24:07.486433 containerd[1944]: time="2025-02-13T15:24:07.486152395Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" Feb 13 15:24:07.486433 containerd[1944]: time="2025-02-13T15:24:07.486313051Z" level=info msg="TearDown network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" successfully" Feb 13 15:24:07.486433 containerd[1944]: time="2025-02-13T15:24:07.486334267Z" level=info msg="StopPodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" returns successfully" Feb 13 15:24:07.488952 containerd[1944]: time="2025-02-13T15:24:07.487518307Z" level=info msg="RemovePodSandbox for \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" Feb 13 15:24:07.488952 containerd[1944]: time="2025-02-13T15:24:07.487577491Z" level=info msg="Forcibly stopping sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\"" Feb 13 15:24:07.488952 containerd[1944]: time="2025-02-13T15:24:07.487729255Z" level=info msg="TearDown network for sandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" successfully" Feb 13 15:24:07.500218 containerd[1944]: time="2025-02-13T15:24:07.500121235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.500349 containerd[1944]: time="2025-02-13T15:24:07.500252875Z" level=info msg="RemovePodSandbox \"e44a9e4af0c6ace48c42a9c1b60f03d57bd61f0c824869cacb67d743da7adc31\" returns successfully" Feb 13 15:24:07.503143 containerd[1944]: time="2025-02-13T15:24:07.503062675Z" level=info msg="StopPodSandbox for \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\"" Feb 13 15:24:07.503744 containerd[1944]: time="2025-02-13T15:24:07.503704903Z" level=info msg="TearDown network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" successfully" Feb 13 15:24:07.504442 containerd[1944]: time="2025-02-13T15:24:07.504276487Z" level=info msg="StopPodSandbox for \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" returns successfully" Feb 13 15:24:07.505021 containerd[1944]: time="2025-02-13T15:24:07.504959335Z" level=info msg="RemovePodSandbox for \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\"" Feb 13 15:24:07.505021 containerd[1944]: time="2025-02-13T15:24:07.505016923Z" level=info msg="Forcibly stopping sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\"" Feb 13 15:24:07.506087 containerd[1944]: time="2025-02-13T15:24:07.505162675Z" level=info msg="TearDown network for sandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" successfully" Feb 13 15:24:07.521707 containerd[1944]: time="2025-02-13T15:24:07.521641687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.521899 containerd[1944]: time="2025-02-13T15:24:07.521722135Z" level=info msg="RemovePodSandbox \"435893129d9cf410050ff104060c6dda0b96db48e590fcf1527c64ac80722735\" returns successfully" Feb 13 15:24:07.522597 containerd[1944]: time="2025-02-13T15:24:07.522548887Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:24:07.522896 containerd[1944]: time="2025-02-13T15:24:07.522831055Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:24:07.522980 containerd[1944]: time="2025-02-13T15:24:07.522868195Z" level=info msg="StopPodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:24:07.523567 containerd[1944]: time="2025-02-13T15:24:07.523422595Z" level=info msg="RemovePodSandbox for \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:24:07.523567 containerd[1944]: time="2025-02-13T15:24:07.523464847Z" level=info msg="Forcibly stopping sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\"" Feb 13 15:24:07.523792 containerd[1944]: time="2025-02-13T15:24:07.523586575Z" level=info msg="TearDown network for sandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" successfully" Feb 13 15:24:07.528998 containerd[1944]: time="2025-02-13T15:24:07.528928027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.529385 containerd[1944]: time="2025-02-13T15:24:07.529005919Z" level=info msg="RemovePodSandbox \"79d73380d5d018c801b6442bd20f050b83777e671b4f5d871e5d2a212d44c606\" returns successfully" Feb 13 15:24:07.530231 containerd[1944]: time="2025-02-13T15:24:07.529970647Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:24:07.530231 containerd[1944]: time="2025-02-13T15:24:07.530129623Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:24:07.530231 containerd[1944]: time="2025-02-13T15:24:07.530150695Z" level=info msg="StopPodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:24:07.530712 containerd[1944]: time="2025-02-13T15:24:07.530644759Z" level=info msg="RemovePodSandbox for \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:24:07.530712 containerd[1944]: time="2025-02-13T15:24:07.530683003Z" level=info msg="Forcibly stopping sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\"" Feb 13 15:24:07.530829 containerd[1944]: time="2025-02-13T15:24:07.530804839Z" level=info msg="TearDown network for sandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" successfully" Feb 13 15:24:07.536200 containerd[1944]: time="2025-02-13T15:24:07.536127175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.536372 containerd[1944]: time="2025-02-13T15:24:07.536213107Z" level=info msg="RemovePodSandbox \"511406098846da416d3c7e8976722d3a157b87383e81fe3ab572965f834d6664\" returns successfully" Feb 13 15:24:07.537122 containerd[1944]: time="2025-02-13T15:24:07.537063259Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:24:07.537264 containerd[1944]: time="2025-02-13T15:24:07.537231007Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:24:07.537322 containerd[1944]: time="2025-02-13T15:24:07.537265471Z" level=info msg="StopPodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:24:07.537969 containerd[1944]: time="2025-02-13T15:24:07.537932551Z" level=info msg="RemovePodSandbox for \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:24:07.538929 containerd[1944]: time="2025-02-13T15:24:07.538236655Z" level=info msg="Forcibly stopping sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\"" Feb 13 15:24:07.538929 containerd[1944]: time="2025-02-13T15:24:07.538372087Z" level=info msg="TearDown network for sandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" successfully" Feb 13 15:24:07.543678 containerd[1944]: time="2025-02-13T15:24:07.543608635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.543784 containerd[1944]: time="2025-02-13T15:24:07.543686683Z" level=info msg="RemovePodSandbox \"a3a555c1723c91e36ec14f8d890cb1a172147213843c8f22195bbe9e2e76f591\" returns successfully" Feb 13 15:24:07.544493 containerd[1944]: time="2025-02-13T15:24:07.544435111Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:24:07.544723 containerd[1944]: time="2025-02-13T15:24:07.544601935Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:24:07.544723 containerd[1944]: time="2025-02-13T15:24:07.544636915Z" level=info msg="StopPodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:24:07.545391 containerd[1944]: time="2025-02-13T15:24:07.545287171Z" level=info msg="RemovePodSandbox for \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:24:07.545391 containerd[1944]: time="2025-02-13T15:24:07.545332375Z" level=info msg="Forcibly stopping sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\"" Feb 13 15:24:07.545599 containerd[1944]: time="2025-02-13T15:24:07.545457151Z" level=info msg="TearDown network for sandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" successfully" Feb 13 15:24:07.551129 containerd[1944]: time="2025-02-13T15:24:07.550810783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.551129 containerd[1944]: time="2025-02-13T15:24:07.550912471Z" level=info msg="RemovePodSandbox \"290d70d4e6347670cdf229848b5cdaaec3a57835815e4267665493689c506557\" returns successfully" Feb 13 15:24:07.551835 containerd[1944]: time="2025-02-13T15:24:07.551544056Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:24:07.551835 containerd[1944]: time="2025-02-13T15:24:07.551701820Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:24:07.551835 containerd[1944]: time="2025-02-13T15:24:07.551724488Z" level=info msg="StopPodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:24:07.552301 containerd[1944]: time="2025-02-13T15:24:07.552241412Z" level=info msg="RemovePodSandbox for \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:24:07.552386 containerd[1944]: time="2025-02-13T15:24:07.552295904Z" level=info msg="Forcibly stopping sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\"" Feb 13 15:24:07.552454 containerd[1944]: time="2025-02-13T15:24:07.552423920Z" level=info msg="TearDown network for sandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" successfully" Feb 13 15:24:07.559663 containerd[1944]: time="2025-02-13T15:24:07.559581056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.559815 containerd[1944]: time="2025-02-13T15:24:07.559667480Z" level=info msg="RemovePodSandbox \"ccfe6d437322ad93912db15756fdac0b173180fc8ab7f928ce5b2787f6195568\" returns successfully" Feb 13 15:24:07.560556 containerd[1944]: time="2025-02-13T15:24:07.560486240Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:24:07.560680 containerd[1944]: time="2025-02-13T15:24:07.560648096Z" level=info msg="TearDown network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" successfully" Feb 13 15:24:07.560680 containerd[1944]: time="2025-02-13T15:24:07.560670980Z" level=info msg="StopPodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" returns successfully" Feb 13 15:24:07.561976 containerd[1944]: time="2025-02-13T15:24:07.561933428Z" level=info msg="RemovePodSandbox for \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:24:07.562399 containerd[1944]: time="2025-02-13T15:24:07.562178204Z" level=info msg="Forcibly stopping sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\"" Feb 13 15:24:07.562399 containerd[1944]: time="2025-02-13T15:24:07.562320944Z" level=info msg="TearDown network for sandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" successfully" Feb 13 15:24:07.569101 containerd[1944]: time="2025-02-13T15:24:07.568713068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.569101 containerd[1944]: time="2025-02-13T15:24:07.568969820Z" level=info msg="RemovePodSandbox \"2dc2eab864c9453077eab0cf366b2a94e0660b32abdda90fd517b62b0dc2ddcb\" returns successfully" Feb 13 15:24:07.570256 containerd[1944]: time="2025-02-13T15:24:07.569704148Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" Feb 13 15:24:07.570256 containerd[1944]: time="2025-02-13T15:24:07.569869688Z" level=info msg="TearDown network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" successfully" Feb 13 15:24:07.570256 containerd[1944]: time="2025-02-13T15:24:07.569918840Z" level=info msg="StopPodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" returns successfully" Feb 13 15:24:07.571729 containerd[1944]: time="2025-02-13T15:24:07.570675620Z" level=info msg="RemovePodSandbox for \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" Feb 13 15:24:07.571729 containerd[1944]: time="2025-02-13T15:24:07.570730616Z" level=info msg="Forcibly stopping sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\"" Feb 13 15:24:07.571729 containerd[1944]: time="2025-02-13T15:24:07.570960248Z" level=info msg="TearDown network for sandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" successfully" Feb 13 15:24:07.577757 containerd[1944]: time="2025-02-13T15:24:07.577547480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:24:07.577757 containerd[1944]: time="2025-02-13T15:24:07.577626356Z" level=info msg="RemovePodSandbox \"63bb49fd2fdcd256a454327194d710ed8d76a101493e24a6d68a2a1a48d3be34\" returns successfully" Feb 13 15:24:07.578487 containerd[1944]: time="2025-02-13T15:24:07.578410088Z" level=info msg="StopPodSandbox for \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\"" Feb 13 15:24:07.578643 containerd[1944]: time="2025-02-13T15:24:07.578598728Z" level=info msg="TearDown network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" successfully" Feb 13 15:24:07.578704 containerd[1944]: time="2025-02-13T15:24:07.578637752Z" level=info msg="StopPodSandbox for \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" returns successfully" Feb 13 15:24:07.580064 containerd[1944]: time="2025-02-13T15:24:07.579988436Z" level=info msg="RemovePodSandbox for \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\"" Feb 13 15:24:07.580170 containerd[1944]: time="2025-02-13T15:24:07.580062692Z" level=info msg="Forcibly stopping sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\"" Feb 13 15:24:07.580222 containerd[1944]: time="2025-02-13T15:24:07.580191920Z" level=info msg="TearDown network for sandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" successfully" Feb 13 15:24:07.586094 containerd[1944]: time="2025-02-13T15:24:07.585993476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:24:07.586094 containerd[1944]: time="2025-02-13T15:24:07.586073972Z" level=info msg="RemovePodSandbox \"d1c9ad9216c5b99114515489ad4a9ed9236498f84dcf6e30ea14141a7dd714b5\" returns successfully" Feb 13 15:24:08.455156 kubelet[2426]: E0213 15:24:08.455084 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:09.455920 kubelet[2426]: E0213 15:24:09.455834 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:10.456791 kubelet[2426]: E0213 15:24:10.456714 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:11.457753 kubelet[2426]: E0213 15:24:11.457681 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:12.458778 kubelet[2426]: E0213 15:24:12.458717 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:13.348775 systemd[1]: Created slice kubepods-besteffort-pod9aae70db_d114_40ba_9970_25b4a77b42ae.slice - libcontainer container kubepods-besteffort-pod9aae70db_d114_40ba_9970_25b4a77b42ae.slice. Feb 13 15:24:13.459841 kubelet[2426]: E0213 15:24:13.459769 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:13.474431 kubelet[2426]: I0213 15:24:13.474239 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6d76566-2fb1-48a9-af45-7b71df4590b1\" (UniqueName: \"kubernetes.io/nfs/9aae70db-d114-40ba-9970-25b4a77b42ae-pvc-e6d76566-2fb1-48a9-af45-7b71df4590b1\") pod \"test-pod-1\" (UID: \"9aae70db-d114-40ba-9970-25b4a77b42ae\") " pod="default/test-pod-1" Feb 13 15:24:13.474431 kubelet[2426]: I0213 15:24:13.474330 2426 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8n8d\" (UniqueName: \"kubernetes.io/projected/9aae70db-d114-40ba-9970-25b4a77b42ae-kube-api-access-f8n8d\") pod \"test-pod-1\" (UID: \"9aae70db-d114-40ba-9970-25b4a77b42ae\") " pod="default/test-pod-1" Feb 13 15:24:13.610920 kernel: FS-Cache: Loaded Feb 13 15:24:13.654509 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:24:13.654645 kernel: RPC: Registered udp transport module. Feb 13 15:24:13.654691 kernel: RPC: Registered tcp transport module. Feb 13 15:24:13.655218 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:24:13.656261 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 15:24:13.966374 kernel: NFS: Registering the id_resolver key type Feb 13 15:24:13.966521 kernel: Key type id_resolver registered Feb 13 15:24:13.966568 kernel: Key type id_legacy registered Feb 13 15:24:14.007767 nfsidmap[4442]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 15:24:14.013922 nfsidmap[4443]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 15:24:14.255580 containerd[1944]: time="2025-02-13T15:24:14.255470953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9aae70db-d114-40ba-9970-25b4a77b42ae,Namespace:default,Attempt:0,}" Feb 13 15:24:14.450458 (udev-worker)[4427]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:24:14.451162 systemd-networkd[1852]: cali5ec59c6bf6e: Link UP Feb 13 15:24:14.452331 systemd-networkd[1852]: cali5ec59c6bf6e: Gained carrier Feb 13 15:24:14.460209 kubelet[2426]: E0213 15:24:14.460117 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.339 [INFO][4444] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.64-k8s-test--pod--1-eth0 default 9aae70db-d114-40ba-9970-25b4a77b42ae 1324 0 2025-02-13 15:23:42 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.64 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.339 [INFO][4444] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.384 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" HandleID="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Workload="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.401 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" HandleID="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Workload="172.31.20.64-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293f60), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.64", "pod":"test-pod-1", "timestamp":"2025-02-13 15:24:14.384952045 +0000 UTC"}, Hostname:"172.31.20.64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.401 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.401 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.401 [INFO][4455] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.64' Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.404 [INFO][4455] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.409 [INFO][4455] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.415 [INFO][4455] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.418 [INFO][4455] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.422 [INFO][4455] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.423 [INFO][4455] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.427 [INFO][4455] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.432 [INFO][4455] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.442 [INFO][4455] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.132/26] block=192.168.20.128/26 handle="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.443 [INFO][4455] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.132/26] handle="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" host="172.31.20.64" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.443 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.443 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.132/26] IPv6=[] ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" HandleID="k8s-pod-network.ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Workload="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.473753 containerd[1944]: 2025-02-13 15:24:14.445 [INFO][4444] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9aae70db-d114-40ba-9970-25b4a77b42ae", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:24:14.475794 containerd[1944]: 2025-02-13 15:24:14.446 [INFO][4444] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.132/32] ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.475794 containerd[1944]: 2025-02-13 15:24:14.446 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.475794 containerd[1944]: 2025-02-13 15:24:14.452 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.475794 containerd[1944]: 2025-02-13 15:24:14.455 [INFO][4444] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.64-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9aae70db-d114-40ba-9970-25b4a77b42ae", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 23, 42, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.64", ContainerID:"ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:d3:be:59:a7:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:24:14.475794 containerd[1944]: 2025-02-13 15:24:14.468 [INFO][4444] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.64-k8s-test--pod--1-eth0" Feb 13 15:24:14.513670 containerd[1944]: time="2025-02-13T15:24:14.512720222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:24:14.513670 containerd[1944]: time="2025-02-13T15:24:14.512817362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:24:14.513670 containerd[1944]: time="2025-02-13T15:24:14.512842370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:24:14.513670 containerd[1944]: time="2025-02-13T15:24:14.513070310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:24:14.543168 systemd[1]: Started cri-containerd-ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc.scope - libcontainer container ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc. 
Feb 13 15:24:14.615157 containerd[1944]: time="2025-02-13T15:24:14.614528787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9aae70db-d114-40ba-9970-25b4a77b42ae,Namespace:default,Attempt:0,} returns sandbox id \"ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc\"" Feb 13 15:24:14.620398 containerd[1944]: time="2025-02-13T15:24:14.620220279Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:24:15.058857 containerd[1944]: time="2025-02-13T15:24:15.058788781Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:15.061740 containerd[1944]: time="2025-02-13T15:24:15.061583521Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:24:15.067450 containerd[1944]: time="2025-02-13T15:24:15.067206145Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 446.926334ms" Feb 13 15:24:15.067450 containerd[1944]: time="2025-02-13T15:24:15.067274245Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:24:15.072024 containerd[1944]: time="2025-02-13T15:24:15.071450869Z" level=info msg="CreateContainer within sandbox \"ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:24:15.097339 containerd[1944]: time="2025-02-13T15:24:15.097259029Z" level=info msg="CreateContainer within sandbox \"ce408ef67d68409632888cfa418f8eb3ac92f590c218d5a33bc7099aa91d8bfc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604\"" Feb 13 15:24:15.098466 containerd[1944]: time="2025-02-13T15:24:15.098287693Z" level=info msg="StartContainer for \"f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604\"" Feb 13 15:24:15.150989 systemd[1]: run-containerd-runc-k8s.io-f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604-runc.XXEX5L.mount: Deactivated successfully. Feb 13 15:24:15.167203 systemd[1]: Started cri-containerd-f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604.scope - libcontainer container f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604. 
Feb 13 15:24:15.215736 containerd[1944]: time="2025-02-13T15:24:15.215675834Z" level=info msg="StartContainer for \"f0a82b67b3f9929ea6b6db658a71c003c0512d2f7ad5826d7af8d3a65dd60604\" returns successfully" Feb 13 15:24:15.461376 kubelet[2426]: E0213 15:24:15.461184 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:15.577273 systemd-networkd[1852]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 15:24:16.462376 kubelet[2426]: E0213 15:24:16.462310 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:17.462733 kubelet[2426]: E0213 15:24:17.462665 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:18.268045 ntpd[1920]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:24:18.270047 ntpd[1920]: 13 Feb 15:24:18 ntpd[1920]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:24:18.463216 kubelet[2426]: E0213 15:24:18.463172 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:19.464383 kubelet[2426]: E0213 15:24:19.464316 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:20.464938 kubelet[2426]: E0213 15:24:20.464856 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:21.465390 kubelet[2426]: E0213 15:24:21.465330 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:22.466168 kubelet[2426]: E0213 15:24:22.466077 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:23.467232 kubelet[2426]: E0213 15:24:23.467183 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:24.468168 kubelet[2426]: E0213 15:24:24.468103 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:25.468971 kubelet[2426]: E0213 15:24:25.468901 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:26.469375 kubelet[2426]: E0213 15:24:26.469307 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:27.402493 kubelet[2426]: E0213 15:24:27.402393 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:27.470541 kubelet[2426]: E0213 15:24:27.470494 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:28.471182 kubelet[2426]: E0213 15:24:28.471116 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:29.471484 kubelet[2426]: E0213 15:24:29.471430 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:30.471991 kubelet[2426]: E0213 15:24:30.471925 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:31.472806 kubelet[2426]: E0213 15:24:31.472745 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:32.473799 kubelet[2426]: E0213 15:24:32.473708 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:33.474145 kubelet[2426]: E0213 15:24:33.474082 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:34.474848 kubelet[2426]: E0213 15:24:34.474774 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:35.476066 kubelet[2426]: E0213 15:24:35.475976 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:36.476324 kubelet[2426]: E0213 15:24:36.476267 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:37.477495 kubelet[2426]: E0213 15:24:37.477389 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:38.478105 kubelet[2426]: E0213 15:24:38.478005 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:39.421411 kubelet[2426]: E0213 15:24:39.421322 2426 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:24:39.478996 kubelet[2426]: E0213 15:24:39.478937 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:40.479565 kubelet[2426]: E0213 15:24:40.479503 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:41.480379 kubelet[2426]: E0213 15:24:41.480291 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:42.481033 kubelet[2426]: E0213 15:24:42.480970 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:43.481243 kubelet[2426]: E0213 15:24:43.481165 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:44.482041 kubelet[2426]: E0213 15:24:44.481979 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:45.482830 kubelet[2426]: E0213 15:24:45.482764 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:46.483851 kubelet[2426]: E0213 15:24:46.483787 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:47.403240 kubelet[2426]: E0213 15:24:47.403184 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:47.484909 kubelet[2426]: E0213 15:24:47.484799 2426 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:48.485398 kubelet[2426]: E0213 15:24:48.485339 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:49.422016 kubelet[2426]: E0213 15:24:49.421917 2426 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:24:49.485920 kubelet[2426]: E0213 15:24:49.485838 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:50.486927 kubelet[2426]: E0213 15:24:50.486837 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:51.487648 kubelet[2426]: E0213 15:24:51.487581 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:52.488253 kubelet[2426]: E0213 15:24:52.488183 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:53.489377 kubelet[2426]: E0213 15:24:53.489315 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:54.490009 kubelet[2426]: E0213 15:24:54.489940 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:55.490533 kubelet[2426]: E0213 15:24:55.490322 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:56.491449 kubelet[2426]: E0213 15:24:56.491390 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:57.492480 kubelet[2426]: E0213 15:24:57.492400 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:58.493188 kubelet[2426]: E0213 15:24:58.493115 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:59.423332 kubelet[2426]: E0213 15:24:59.423036 2426 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": context deadline exceeded" Feb 13 15:24:59.494289 kubelet[2426]: E0213 15:24:59.494234 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:00.495380 kubelet[2426]: E0213 15:25:00.495318 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:01.495649 kubelet[2426]: E0213 15:25:01.495575 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:02.168034 kubelet[2426]: E0213 15:25:02.167956 2426 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": unexpected EOF" Feb 13 15:25:02.170904 kubelet[2426]: E0213 15:25:02.170297 2426 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://172.31.28.23:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=< Feb 13 15:25:02.170904 kubelet[2426]: &Event{ObjectMeta:{calico-node-m2bff.1823cdec3a81da4a calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-m2bff,UID:cb51a8f1-f230-4817-aac4-35ea2be675d3,APIVersion:v1,ResourceVersion:968,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-02-13 15:24:55.566 [INFO][334] node/health.go 202: Number of node(s) with BGP peering established = 0 Feb 13 15:25:02.170904 kubelet[2426]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.28.23 Feb 13 15:25:02.170904 kubelet[2426]: ,Source:EventSource{Component:kubelet,Host:172.31.20.64,},FirstTimestamp:2025-02-13 15:24:55.572707914 +0000 UTC m=+110.756618675,LastTimestamp:2025-02-13 15:24:55.572707914 +0000 UTC m=+110.756618675,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.64,} Feb 13 15:25:02.170904 kubelet[2426]: > Feb 13 15:25:02.175926 kubelet[2426]: E0213 15:25:02.175814 2426 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": read tcp 172.31.20.64:47662->172.31.28.23:6443: read: connection reset by peer" Feb 13 15:25:02.175926 kubelet[2426]: I0213 15:25:02.175918 2426 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 15:25:02.177460 kubelet[2426]: E0213 15:25:02.177394 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": dial tcp 172.31.28.23:6443: connect: connection refused" interval="200ms" Feb 13 15:25:02.379147 kubelet[2426]: E0213 15:25:02.379077 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": dial tcp 172.31.28.23:6443: connect: connection refused" interval="400ms" Feb 13 15:25:02.496109 kubelet[2426]: E0213 15:25:02.495927 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:02.781054 kubelet[2426]: E0213 15:25:02.780851 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.64?timeout=10s\": dial tcp 172.31.28.23:6443: connect: connection refused" interval="800ms" Feb 13 15:25:03.496963 kubelet[2426]: E0213 15:25:03.496851 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:04.497961 kubelet[2426]: E0213 15:25:04.497901 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:05.498606 kubelet[2426]: E0213 15:25:05.498533 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:06.499627 kubelet[2426]: E0213 15:25:06.499561 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 15:25:07.403108 kubelet[2426]: E0213 15:25:07.403007 2426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:07.500330 kubelet[2426]: E0213 15:25:07.500269 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:08.500860 kubelet[2426]: E0213 15:25:08.500791 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:09.501204 kubelet[2426]: E0213 15:25:09.501138 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:10.501810 kubelet[2426]: E0213 15:25:10.501749 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:11.502004 kubelet[2426]: E0213 15:25:11.501942 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:12.502313 kubelet[2426]: E0213 15:25:12.502251 2426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"