Jan 13 20:09:37.162488 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 20:09:37.162533 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025 Jan 13 20:09:37.162557 kernel: KASLR disabled due to lack of seed Jan 13 20:09:37.162573 kernel: efi: EFI v2.7 by EDK II Jan 13 20:09:37.162589 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Jan 13 20:09:37.162604 kernel: secureboot: Secure boot disabled Jan 13 20:09:37.162621 kernel: ACPI: Early table checksum verification disabled Jan 13 20:09:37.162636 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 20:09:37.162651 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 20:09:37.162666 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 20:09:37.162705 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 13 20:09:37.162723 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 20:09:37.162739 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 20:09:37.162755 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 20:09:37.162773 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 20:09:37.162795 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 20:09:37.162812 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 20:09:37.162828 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 20:09:37.162844 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 20:09:37.162860 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 20:09:37.162876 kernel: printk: bootconsole [uart0] enabled Jan 13 20:09:37.162892 kernel: NUMA: Failed to initialise from firmware Jan 13 20:09:37.162909 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 20:09:37.162925 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 13 20:09:37.162941 kernel: Zone ranges: Jan 13 20:09:37.162957 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:09:37.162977 kernel: DMA32 empty Jan 13 20:09:37.162994 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 20:09:37.163009 kernel: Movable zone start for each node Jan 13 20:09:37.163025 kernel: Early memory node ranges Jan 13 20:09:37.163041 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 20:09:37.163057 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 20:09:37.163073 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 20:09:37.163089 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 20:09:37.163104 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 20:09:37.163120 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 20:09:37.163136 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 20:09:37.163152 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 20:09:37.163173 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 13 20:09:37.163190 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 20:09:37.164249 kernel: psci: probing for conduit method from ACPI. Jan 13 20:09:37.164278 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 20:09:37.164297 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:09:37.164322 kernel: psci: Trusted OS migration not required Jan 13 20:09:37.164339 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:09:37.164356 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:09:37.164374 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:09:37.164391 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:09:37.164409 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:09:37.164426 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:09:37.164443 kernel: CPU features: detected: Spectre-v2 Jan 13 20:09:37.164460 kernel: CPU features: detected: Spectre-v3a Jan 13 20:09:37.164477 kernel: CPU features: detected: Spectre-BHB Jan 13 20:09:37.164494 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 20:09:37.164511 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 20:09:37.164533 kernel: alternatives: applying boot alternatives Jan 13 20:09:37.164552 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 13 20:09:37.164572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:09:37.164589 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:09:37.164606 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:09:37.164623 kernel: Fallback order for Node 0: 0 Jan 13 20:09:37.164641 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 13 20:09:37.164658 kernel: Policy zone: Normal Jan 13 20:09:37.164674 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:09:37.164691 kernel: software IO TLB: area num 2. Jan 13 20:09:37.164713 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 13 20:09:37.164732 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Jan 13 20:09:37.164749 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:09:37.164766 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:09:37.164785 kernel: rcu: RCU event tracing is enabled. Jan 13 20:09:37.164803 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:09:37.164820 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:09:37.164838 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:09:37.164855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:09:37.164873 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:09:37.164890 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:09:37.164911 kernel: GICv3: 96 SPIs implemented Jan 13 20:09:37.164928 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:09:37.164945 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:09:37.164962 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 20:09:37.164979 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 20:09:37.164996 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 20:09:37.165013 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:09:37.165031 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:09:37.165048 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 13 20:09:37.165065 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 20:09:37.165082 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 13 20:09:37.165099 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:09:37.165121 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 20:09:37.165138 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 20:09:37.165157 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 20:09:37.165174 kernel: Console: colour dummy device 80x25 Jan 13 20:09:37.165191 kernel: printk: console [tty1] enabled Jan 13 20:09:37.165226 kernel: ACPI: Core revision 20230628 Jan 13 20:09:37.165250 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 20:09:37.165269 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:09:37.165287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:09:37.167298 kernel: landlock: Up and running. Jan 13 20:09:37.167344 kernel: SELinux: Initializing. Jan 13 20:09:37.167363 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:09:37.167381 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:09:37.167399 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:09:37.167417 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:09:37.167435 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:09:37.167453 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:09:37.167471 kernel: Platform MSI: ITS@0x10080000 domain created Jan 13 20:09:37.167493 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 13 20:09:37.167512 kernel: Remapping and enabling EFI services. Jan 13 20:09:37.167529 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:09:37.167547 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:09:37.167567 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 20:09:37.167586 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 13 20:09:37.167604 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 20:09:37.167621 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:09:37.167638 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:09:37.167656 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:09:37.167680 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 20:09:37.167698 kernel: CPU features: detected: CRC32 instructions Jan 13 20:09:37.167728 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:09:37.167754 kernel: alternatives: applying system-wide alternatives Jan 13 20:09:37.167774 kernel: devtmpfs: initialized Jan 13 20:09:37.167794 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:09:37.167812 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:09:37.167830 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:09:37.167848 kernel: SMBIOS 3.0.0 present. Jan 13 20:09:37.167872 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 20:09:37.167890 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:09:37.167908 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:09:37.167927 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:09:37.167946 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:09:37.167964 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:09:37.167982 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1 Jan 13 20:09:37.168005 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:09:37.168023 kernel: cpuidle: using governor menu Jan 13 20:09:37.168042 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 20:09:37.168060 kernel: ASID allocator initialised with 65536 entries Jan 13 20:09:37.168078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:09:37.168096 kernel: Serial: AMBA PL011 UART driver Jan 13 20:09:37.168114 kernel: Modules: 17360 pages in range for non-PLT usage Jan 13 20:09:37.168132 kernel: Modules: 508880 pages in range for PLT usage Jan 13 20:09:37.168151 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:09:37.168174 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:09:37.168192 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:09:37.168257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:09:37.168282 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:09:37.168301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:09:37.168319 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:09:37.168337 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:09:37.168355 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:09:37.168373 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:09:37.168398 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:09:37.168417 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:09:37.168435 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:09:37.168453 kernel: ACPI: Interpreter enabled Jan 13 20:09:37.168471 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:09:37.168489 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:09:37.168508 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 13 20:09:37.168841 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:09:37.169058 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:09:37.169322 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:09:37.169528 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 13 20:09:37.169729 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 13 20:09:37.169754 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 20:09:37.169773 kernel: acpiphp: Slot [1] registered Jan 13 20:09:37.169791 kernel: acpiphp: Slot [2] registered Jan 13 20:09:37.169809 kernel: acpiphp: Slot [3] registered Jan 13 20:09:37.169834 kernel: acpiphp: Slot [4] registered Jan 13 20:09:37.169853 kernel: acpiphp: Slot [5] registered Jan 13 20:09:37.169871 kernel: acpiphp: Slot [6] registered Jan 13 20:09:37.169889 kernel: acpiphp: Slot [7] registered Jan 13 20:09:37.169906 kernel: acpiphp: Slot [8] registered Jan 13 20:09:37.169924 kernel: acpiphp: Slot [9] registered Jan 13 20:09:37.169942 kernel: acpiphp: Slot [10] registered Jan 13 20:09:37.169959 kernel: acpiphp: Slot [11] registered Jan 13 20:09:37.169977 kernel: acpiphp: Slot [12] registered Jan 13 20:09:37.169995 kernel: acpiphp: Slot [13] registered Jan 13 20:09:37.170018 kernel: acpiphp: Slot [14] registered Jan 13 20:09:37.170036 kernel: acpiphp: Slot [15] registered Jan 13 20:09:37.170054 kernel: acpiphp: Slot [16] registered Jan 13 20:09:37.170072 kernel: acpiphp: Slot [17] registered Jan 13 20:09:37.170090 kernel: acpiphp: Slot [18] registered Jan 13 20:09:37.170108 kernel: acpiphp: Slot [19] registered Jan 13 20:09:37.170125 kernel: acpiphp: Slot [20] registered Jan 13 20:09:37.170143 kernel: acpiphp: Slot [21] registered Jan 13 20:09:37.170161 kernel: acpiphp: Slot [22] registered Jan 13 20:09:37.170184 kernel: acpiphp: Slot [23] registered Jan 13 20:09:37.170202 kernel: acpiphp: Slot [24] registered Jan 13 20:09:37.171414 kernel: acpiphp: Slot [25] registered Jan 13 20:09:37.171437 kernel: acpiphp: Slot [26] registered Jan 13 20:09:37.171455 kernel: acpiphp: Slot [27] registered Jan 13 20:09:37.171473 kernel: acpiphp: Slot [28] registered Jan 13 20:09:37.171491 kernel: acpiphp: Slot [29] registered Jan 13 20:09:37.171509 kernel: acpiphp: Slot [30] registered Jan 13 20:09:37.171527 kernel: acpiphp: Slot [31] registered Jan 13 20:09:37.171545 kernel: PCI host bridge to bus 0000:00 Jan 13 20:09:37.171806 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 20:09:37.171985 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:09:37.172161 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 20:09:37.172391 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 13 20:09:37.172635 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 13 20:09:37.172850 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 13 20:09:37.173071 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 13 20:09:37.174387 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 13 20:09:37.174620 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 13 20:09:37.174858 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:09:37.175082 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 13 20:09:37.175328 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 13 20:09:37.175539 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 13 20:09:37.175751 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 13 20:09:37.175954 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:09:37.176156 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 13 20:09:37.180925 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 13 20:09:37.181156 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 13 20:09:37.181945 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 13 20:09:37.182154 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 13 20:09:37.182394 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 20:09:37.182575 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:09:37.182776 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 20:09:37.182802 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:09:37.182822 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:09:37.182841 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:09:37.182860 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:09:37.182878 kernel: iommu: Default domain type: Translated Jan 13 20:09:37.182903 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:09:37.182921 kernel: efivars: Registered efivars operations Jan 13 20:09:37.182939 kernel: vgaarb: loaded Jan 13 20:09:37.182957 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:09:37.182975 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:09:37.182993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:09:37.183011 kernel: pnp: PnP ACPI init Jan 13 20:09:37.183247 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 20:09:37.183281 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:09:37.183301 kernel: NET: Registered PF_INET protocol family Jan 13 20:09:37.183319 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:09:37.183338 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:09:37.183356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:09:37.183374 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:09:37.183392 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:09:37.183411 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:09:37.183429 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:09:37.183452 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:09:37.183471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:09:37.183489 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:09:37.183506 kernel: kvm [1]: HYP mode not available Jan 13 20:09:37.183524 kernel: Initialise system trusted keyrings Jan 13 20:09:37.183543 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:09:37.183561 kernel: Key type asymmetric registered Jan 13 20:09:37.183578 kernel: Asymmetric key parser 'x509' registered Jan 13 20:09:37.183596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:09:37.183618 kernel: io scheduler mq-deadline registered Jan 13 
20:09:37.183637 kernel: io scheduler kyber registered Jan 13 20:09:37.183655 kernel: io scheduler bfq registered Jan 13 20:09:37.183882 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 20:09:37.183908 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:09:37.183927 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:09:37.183946 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 20:09:37.183964 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 20:09:37.183987 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:09:37.184007 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:09:37.184247 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 20:09:37.184274 kernel: printk: console [ttyS0] disabled Jan 13 20:09:37.184293 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 20:09:37.184312 kernel: printk: console [ttyS0] enabled Jan 13 20:09:37.184330 kernel: printk: bootconsole [uart0] disabled Jan 13 20:09:37.184348 kernel: thunder_xcv, ver 1.0 Jan 13 20:09:37.184365 kernel: thunder_bgx, ver 1.0 Jan 13 20:09:37.184383 kernel: nicpf, ver 1.0 Jan 13 20:09:37.184408 kernel: nicvf, ver 1.0 Jan 13 20:09:37.184622 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:09:37.184816 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:09:36 UTC (1736798976) Jan 13 20:09:37.184843 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:09:37.184864 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 13 20:09:37.184884 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:09:37.184903 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:09:37.184929 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:09:37.184948 kernel: Segment Routing with IPv6 Jan 13 20:09:37.184967 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:09:37.184985 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:09:37.185004 kernel: Key type dns_resolver registered Jan 13 20:09:37.185022 kernel: registered taskstats version 1 Jan 13 20:09:37.185041 kernel: Loading compiled-in X.509 certificates Jan 13 20:09:37.185060 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0' Jan 13 20:09:37.185079 kernel: Key type .fscrypt registered Jan 13 20:09:37.185097 kernel: Key type fscrypt-provisioning registered Jan 13 20:09:37.185121 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:09:37.185140 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:09:37.185159 kernel: ima: No architecture policies found Jan 13 20:09:37.185178 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:09:37.185196 kernel: clk: Disabling unused clocks Jan 13 20:09:37.188274 kernel: Freeing unused kernel memory: 39936K Jan 13 20:09:37.188306 kernel: Run /init as init process Jan 13 20:09:37.188325 kernel: with arguments: Jan 13 20:09:37.188343 kernel: /init Jan 13 20:09:37.188371 kernel: with environment: Jan 13 20:09:37.188389 kernel: HOME=/ Jan 13 20:09:37.188407 kernel: TERM=linux Jan 13 20:09:37.188425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:09:37.188448 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:09:37.188472 systemd[1]: Detected virtualization amazon. Jan 13 20:09:37.188492 systemd[1]: Detected architecture arm64. Jan 13 20:09:37.188515 systemd[1]: Running in initrd. Jan 13 20:09:37.188535 systemd[1]: No hostname configured, using default hostname. Jan 13 20:09:37.188554 systemd[1]: Hostname set to . Jan 13 20:09:37.188574 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:09:37.188594 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:09:37.188613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:09:37.188633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:09:37.188654 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:09:37.188679 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:09:37.188700 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:09:37.188720 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:09:37.188742 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:09:37.188763 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:09:37.188782 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:09:37.188802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:09:37.188826 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:09:37.188846 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:09:37.188865 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:09:37.188885 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:09:37.188905 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:09:37.188924 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:09:37.188944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:09:37.188964 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:09:37.188983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:09:37.189007 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:09:37.189027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:09:37.189047 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:09:37.189066 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:09:37.189086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:09:37.189106 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:09:37.189125 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:09:37.189145 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:09:37.189169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:09:37.189189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:09:37.189224 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:09:37.189250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:09:37.189318 systemd-journald[252]: Collecting audit messages is disabled. Jan 13 20:09:37.189368 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:09:37.189391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:09:37.189411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:09:37.189430 systemd-journald[252]: Journal started Jan 13 20:09:37.189480 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2738e24efa74d38e3a44786666d9bd) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:09:37.157551 systemd-modules-load[253]: Inserted module 'overlay' Jan 13 20:09:37.200447 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:09:37.200491 kernel: Bridge firewalling registered Jan 13 20:09:37.195825 systemd-modules-load[253]: Inserted module 'br_netfilter' Jan 13 20:09:37.207630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:09:37.211890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:37.219259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:09:37.237766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:09:37.248040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:37.258521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:09:37.265616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:09:37.291074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:37.312537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:09:37.315661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:09:37.332693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:09:37.341853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:09:37.352951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 20:09:37.394822 dracut-cmdline[293]: dracut-dracut-053 Jan 13 20:09:37.407146 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 13 20:09:37.409007 systemd-resolved[291]: Positive Trust Anchors: Jan 13 20:09:37.409028 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:09:37.409089 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:09:37.584233 kernel: SCSI subsystem initialized Jan 13 20:09:37.590248 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:09:37.602238 kernel: iscsi: registered transport (tcp) Jan 13 20:09:37.624542 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:09:37.624616 kernel: QLogic iSCSI HBA Driver Jan 13 20:09:37.686246 kernel: random: crng init done Jan 13 20:09:37.686570 systemd-resolved[291]: Defaulting to hostname 'linux'. Jan 13 20:09:37.688312 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:09:37.694638 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:09:37.720462 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:09:37.731529 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:09:37.768012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:09:37.768087 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:09:37.768114 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:09:37.834253 kernel: raid6: neonx8 gen() 6539 MB/s Jan 13 20:09:37.851241 kernel: raid6: neonx4 gen() 6534 MB/s Jan 13 20:09:37.868241 kernel: raid6: neonx2 gen() 5408 MB/s Jan 13 20:09:37.885244 kernel: raid6: neonx1 gen() 3946 MB/s Jan 13 20:09:37.902241 kernel: raid6: int64x8 gen() 3603 MB/s Jan 13 20:09:37.919242 kernel: raid6: int64x4 gen() 3679 MB/s Jan 13 20:09:37.936242 kernel: raid6: int64x2 gen() 3550 MB/s Jan 13 20:09:37.953994 kernel: raid6: int64x1 gen() 2764 MB/s Jan 13 20:09:37.954031 kernel: raid6: using algorithm neonx8 gen() 6539 MB/s Jan 13 20:09:37.971987 kernel: raid6: .... 
xor() 4742 MB/s, rmw enabled Jan 13 20:09:37.972026 kernel: raid6: using neon recovery algorithm Jan 13 20:09:37.980032 kernel: xor: measuring software checksum speed Jan 13 20:09:37.980081 kernel: 8regs : 12926 MB/sec Jan 13 20:09:37.981244 kernel: 32regs : 12074 MB/sec Jan 13 20:09:37.983203 kernel: arm64_neon : 8973 MB/sec Jan 13 20:09:37.983259 kernel: xor: using function: 8regs (12926 MB/sec) Jan 13 20:09:38.065254 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:09:38.083719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:09:38.108593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:09:38.153383 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jan 13 20:09:38.161111 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:09:38.181547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:09:38.205554 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jan 13 20:09:38.259371 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:09:38.274512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:09:38.386280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:09:38.402734 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:09:38.458071 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:09:38.468127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:09:38.481343 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:09:38.486332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:09:38.505562 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:09:38.530122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:09:38.557544 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:09:38.557605 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 20:09:38.574359 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 20:09:38.574605 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 20:09:38.574857 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:6e:7b:29:99:31 Jan 13 20:09:38.584588 (udev-worker)[538]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:38.598419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:09:38.598649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:09:38.603899 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:09:38.610338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:09:38.610712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:38.616279 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:09:38.634757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:09:38.657068 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:09:38.657130 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 20:09:38.663276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:38.676294 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 20:09:38.677518 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:09:38.690327 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:09:38.690400 kernel: GPT:9289727 != 16777215 Jan 13 20:09:38.690425 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:09:38.690449 kernel: GPT:9289727 != 16777215 Jan 13 20:09:38.691903 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:09:38.691936 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:09:38.724758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:09:38.787272 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (520) Jan 13 20:09:38.814075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 20:09:38.828250 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (519) Jan 13 20:09:38.899532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 20:09:38.921718 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 20:09:38.926708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 20:09:38.948457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:09:38.962556 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:09:38.980032 disk-uuid[664]: Primary Header is updated. Jan 13 20:09:38.980032 disk-uuid[664]: Secondary Entries is updated. Jan 13 20:09:38.980032 disk-uuid[664]: Secondary Header is updated. Jan 13 20:09:38.991248 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:09:39.011258 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:09:40.014398 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:09:40.014464 disk-uuid[665]: The operation has completed successfully. Jan 13 20:09:40.192278 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:09:40.192480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:09:40.257462 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:09:40.268388 sh[923]: Success Jan 13 20:09:40.293274 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:09:40.410117 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:09:40.430532 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:09:40.437960 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:09:40.467656 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2 Jan 13 20:09:40.467716 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:09:40.469495 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:09:40.470801 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:09:40.470836 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:09:40.500254 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:09:40.504229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:09:40.509007 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:09:40.520447 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:09:40.531157 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:09:40.557007 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:09:40.557080 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:09:40.558535 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:09:40.566259 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:09:40.585069 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:09:40.589331 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:09:40.597346 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:09:40.608551 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:09:40.727308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:09:40.740587 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:09:40.791790 systemd-networkd[1116]: lo: Link UP Jan 13 20:09:40.791812 systemd-networkd[1116]: lo: Gained carrier Jan 13 20:09:40.794400 systemd-networkd[1116]: Enumeration completed Jan 13 20:09:40.795127 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:09:40.795134 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:09:40.796513 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:09:40.803901 systemd[1]: Reached target network.target - Network. Jan 13 20:09:40.809271 systemd-networkd[1116]: eth0: Link UP Jan 13 20:09:40.809279 systemd-networkd[1116]: eth0: Gained carrier Jan 13 20:09:40.809296 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:09:40.854349 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.26.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:09:40.898012 ignition[1020]: Ignition 2.20.0 Jan 13 20:09:40.898033 ignition[1020]: Stage: fetch-offline Jan 13 20:09:40.898956 ignition[1020]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:40.898980 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:40.899636 ignition[1020]: Ignition finished successfully Jan 13 20:09:40.917539 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:09:40.936610 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:09:40.960488 ignition[1124]: Ignition 2.20.0 Jan 13 20:09:40.960965 ignition[1124]: Stage: fetch Jan 13 20:09:40.961575 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:40.961599 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:40.961786 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:40.973477 ignition[1124]: PUT result: OK Jan 13 20:09:40.980319 ignition[1124]: parsed url from cmdline: "" Jan 13 20:09:40.980457 ignition[1124]: no config URL provided Jan 13 20:09:40.980476 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:09:40.980501 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:09:40.980532 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:40.989536 ignition[1124]: PUT result: OK Jan 13 20:09:40.989626 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 20:09:40.994291 ignition[1124]: GET result: OK Jan 13 20:09:40.994498 ignition[1124]: parsing config with SHA512: 9ad50b581a77f17de85cc311d2130e46616fcde7b13427e2caff1dfbc13c1f0fc2b30dee17468756fe0480a52a63cce11c2bb76e839b87ccd1f5a0a30bcc6683 Jan 13 20:09:40.999800 unknown[1124]: fetched base config from "system" Jan 13 20:09:41.000240 ignition[1124]: fetch: fetch complete Jan 13 20:09:40.999816 unknown[1124]: fetched base config from "system" Jan 13 20:09:41.000252 ignition[1124]: fetch: fetch passed Jan 13 20:09:40.999830 unknown[1124]: fetched user config from "aws" Jan 13 20:09:41.000330 ignition[1124]: Ignition finished successfully Jan 13 20:09:41.015251 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:09:41.026507 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:09:41.058850 ignition[1130]: Ignition 2.20.0 Jan 13 20:09:41.058878 ignition[1130]: Stage: kargs Jan 13 20:09:41.059569 ignition[1130]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:41.059594 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:41.059762 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:41.065437 ignition[1130]: PUT result: OK Jan 13 20:09:41.075058 ignition[1130]: kargs: kargs passed Jan 13 20:09:41.075236 ignition[1130]: Ignition finished successfully Jan 13 20:09:41.080414 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:09:41.091493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:09:41.119977 ignition[1136]: Ignition 2.20.0 Jan 13 20:09:41.119998 ignition[1136]: Stage: disks Jan 13 20:09:41.120586 ignition[1136]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:41.120617 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:41.120763 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:41.124640 ignition[1136]: PUT result: OK Jan 13 20:09:41.136900 ignition[1136]: disks: disks passed Jan 13 20:09:41.136989 ignition[1136]: Ignition finished successfully Jan 13 20:09:41.140269 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:09:41.146179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:09:41.150391 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:09:41.153318 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:09:41.155733 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:09:41.158191 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:09:41.182583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:09:41.237116 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:09:41.243549 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:09:41.258529 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:09:41.340257 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none. Jan 13 20:09:41.341805 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:09:41.346640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:09:41.367375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:09:41.374011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:09:41.383830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:09:41.384103 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:09:41.384154 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:09:41.419247 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163) Jan 13 20:09:41.423589 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:09:41.423640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:09:41.423666 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:09:41.430921 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:09:41.439632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:09:41.450467 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:09:41.453161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:09:41.562875 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:09:41.573258 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:09:41.583179 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:09:41.592289 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:09:41.785702 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:09:41.796883 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:09:41.802514 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:09:41.824510 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:09:41.829741 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:09:41.872241 ignition[1276]: INFO : Ignition 2.20.0 Jan 13 20:09:41.872241 ignition[1276]: INFO : Stage: mount Jan 13 20:09:41.872241 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:41.872241 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:41.883885 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:41.883885 ignition[1276]: INFO : PUT result: OK Jan 13 20:09:41.893768 ignition[1276]: INFO : mount: mount passed Jan 13 20:09:41.897410 ignition[1276]: INFO : Ignition finished successfully Jan 13 20:09:41.898306 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:09:41.914359 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:09:41.922444 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:09:41.944616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:09:41.970232 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1288) Jan 13 20:09:41.975169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:09:41.975252 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:09:41.975281 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:09:41.980228 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:09:41.984461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:09:42.016845 ignition[1305]: INFO : Ignition 2.20.0 Jan 13 20:09:42.016845 ignition[1305]: INFO : Stage: files Jan 13 20:09:42.022132 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:42.022132 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:42.022132 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:42.022132 ignition[1305]: INFO : PUT result: OK Jan 13 20:09:42.036335 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:09:42.057288 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:09:42.060450 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:09:42.076459 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:09:42.080109 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:09:42.085107 unknown[1305]: wrote ssh authorized keys file for user: core Jan 13 20:09:42.088177 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:09:42.093953 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 13 20:09:42.479616 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:09:42.736369 systemd-networkd[1116]: eth0: Gained IPv6LL Jan 13 20:09:42.846334 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:09:42.856142 ignition[1305]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:09:42.856142 ignition[1305]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:09:42.856142 ignition[1305]: INFO : files: files passed Jan 13 20:09:42.856142 ignition[1305]: INFO : Ignition finished successfully Jan 13 20:09:42.856729 systemd[1]: Finished 
ignition-files.service - Ignition (files). Jan 13 20:09:42.879579 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:09:42.887724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:09:42.899128 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:09:42.899396 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:09:42.928387 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:09:42.928387 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:09:42.935398 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:09:42.942147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:09:42.942871 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:09:42.955802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:09:43.008634 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:09:43.008908 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:09:43.019347 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:09:43.022362 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:09:43.029599 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:09:43.043565 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:09:43.071777 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:09:43.082619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:09:43.108885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:09:43.115356 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:09:43.118390 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:09:43.123632 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:09:43.123863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:09:43.126678 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:09:43.130740 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:09:43.139407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:09:43.142241 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:09:43.151921 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:09:43.154752 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:09:43.157414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:09:43.167700 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:09:43.170509 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:09:43.177242 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:09:43.179315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 13 20:09:43.179532 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:09:43.182581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:09:43.193733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:09:43.196738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:09:43.201519 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:09:43.204099 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:09:43.204338 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:09:43.213248 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:09:43.213478 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:09:43.216604 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:09:43.216799 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:09:43.234663 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:09:43.252589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:09:43.270811 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:09:43.273494 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:09:43.285273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:09:43.285502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:09:43.300733 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:09:43.306819 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:09:43.308967 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:09:43.319112 ignition[1357]: INFO : Ignition 2.20.0 Jan 13 20:09:43.319112 ignition[1357]: INFO : Stage: umount Jan 13 20:09:43.319112 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:43.319112 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:09:43.319112 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:09:43.321869 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:09:43.338269 ignition[1357]: INFO : PUT result: OK Jan 13 20:09:43.338269 ignition[1357]: INFO : umount: umount passed Jan 13 20:09:43.338269 ignition[1357]: INFO : Ignition finished successfully Jan 13 20:09:43.322044 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:09:43.342616 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:09:43.342883 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:09:43.347755 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:09:43.347859 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:09:43.351881 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:09:43.351970 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:09:43.367104 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:09:43.367189 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:09:43.369641 systemd[1]: Stopped target network.target - Network. Jan 13 20:09:43.371752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:09:43.371832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:09:43.374685 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:09:43.376691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:09:43.392584 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:09:43.395520 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:09:43.397646 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:09:43.399915 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:09:43.399989 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:09:43.402805 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:09:43.402873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:09:43.420584 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:09:43.420675 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:09:43.423058 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:09:43.423134 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:09:43.425612 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:09:43.425686 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:09:43.428437 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:09:43.432323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:09:43.460399 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:09:43.460616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:09:43.464173 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:09:43.464321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:09:43.467902 systemd-networkd[1116]: eth0: DHCPv6 lease lost Jan 13 20:09:43.481567 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:09:43.481950 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:09:43.488846 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:09:43.488973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:09:43.501563 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:09:43.507552 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:09:43.507665 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:09:43.510704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:09:43.510785 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:43.513319 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:09:43.513394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:09:43.516138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:09:43.543937 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:09:43.545541 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:09:43.560421 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 20:09:43.560911 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:09:43.572120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:09:43.572225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:09:43.581114 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:09:43.581185 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:09:43.583586 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:09:43.583667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:09:43.586031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:09:43.586107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:09:43.600931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:09:43.601021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:09:43.615487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:09:43.618685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:09:43.618796 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:09:43.621917 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:09:43.621999 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:09:43.625132 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:09:43.625231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:09:43.628555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:09:43.628632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:43.668301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:09:43.669921 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:09:43.677744 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:09:43.692457 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:09:43.707711 systemd[1]: Switching root. Jan 13 20:09:43.750567 systemd-journald[252]: Journal stopped Jan 13 20:09:45.731051 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 13 20:09:45.731178 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:09:45.732150 kernel: SELinux: policy capability open_perms=1 Jan 13 20:09:45.732202 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:09:45.732277 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:09:45.732322 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:09:45.732360 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:09:45.732394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:09:45.732427 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:09:45.732468 kernel: audit: type=1403 audit(1736798984.105:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:09:45.732512 systemd[1]: Successfully loaded SELinux policy in 50.677ms. Jan 13 20:09:45.732561 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.587ms. 
Jan 13 20:09:45.732598 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:09:45.732630 systemd[1]: Detected virtualization amazon. Jan 13 20:09:45.732662 systemd[1]: Detected architecture arm64. Jan 13 20:09:45.732696 systemd[1]: Detected first boot. Jan 13 20:09:45.732729 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:09:45.732765 zram_generator::config[1400]: No configuration found. Jan 13 20:09:45.732800 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:09:45.732832 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:09:45.732867 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:09:45.732899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:09:45.732928 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:09:45.732961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:09:45.732992 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:09:45.733025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:09:45.733056 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:09:45.733087 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:09:45.733126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:09:45.733155 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:09:45.733186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:09:45.733269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:09:45.733306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:09:45.733335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:09:45.733367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:09:45.737672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:09:45.737722 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:09:45.737760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:09:45.737791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:09:45.737823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:09:45.737854 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:09:45.737882 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:09:45.737916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:09:45.737949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:09:45.737980 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:09:45.738016 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 20:09:45.738046 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:09:45.738075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:09:45.738103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:09:45.738131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:09:45.738161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:09:45.738190 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:09:45.740281 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:09:45.740334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:09:45.740375 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:09:45.740405 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:09:45.740434 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:09:45.740464 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:09:45.740493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:09:45.740524 systemd[1]: Reached target machines.target - Containers. Jan 13 20:09:45.740554 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:09:45.740583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:45.740616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:09:45.740645 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:09:45.740674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:45.740707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:09:45.740739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:45.740769 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:09:45.740798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:45.740826 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:09:45.740855 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:09:45.740889 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:09:45.740918 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:09:45.740946 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:09:45.740974 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:09:45.741003 kernel: loop: module loaded Jan 13 20:09:45.741031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:09:45.741058 kernel: fuse: init (API version 7.39) Jan 13 20:09:45.741086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:09:45.741117 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 20:09:45.741150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:09:45.741180 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:09:45.741227 systemd[1]: Stopped verity-setup.service. Jan 13 20:09:45.741262 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:09:45.741291 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:09:45.741325 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:09:45.741354 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:09:45.741383 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:09:45.741414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:09:45.741449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:09:45.741478 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:09:45.741507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:09:45.741535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:45.741563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:45.741598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:45.741627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:45.741654 kernel: ACPI: bus type drm_connector registered Jan 13 20:09:45.741682 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:09:45.741711 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:09:45.741739 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:09:45.741770 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:09:45.741803 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:45.741832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:45.741860 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:09:45.741891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:09:45.741920 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:09:45.741948 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:09:45.742019 systemd-journald[1478]: Collecting audit messages is disabled. Jan 13 20:09:45.742072 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:09:45.742101 systemd-journald[1478]: Journal started Jan 13 20:09:45.742157 systemd-journald[1478]: Runtime Journal (/run/log/journal/ec2738e24efa74d38e3a44786666d9bd) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:09:45.096688 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:09:45.122872 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:09:45.123658 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:09:45.759555 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:09:45.759654 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:09:45.769144 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 13 20:09:45.780220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:09:45.795016 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:09:45.805652 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:09:45.805754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:45.821069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:09:45.821167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:09:45.840600 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:09:45.840690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:09:45.852355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:45.867957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:09:45.879621 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:09:45.889967 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:09:45.919074 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:09:45.924150 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:09:45.928046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:09:45.931861 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:09:45.935353 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:09:45.985183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:09:46.000696 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:09:46.018703 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:09:46.024916 kernel: loop0: detected capacity change from 0 to 116784 Jan 13 20:09:46.040531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:46.048075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:09:46.066609 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:09:46.079591 systemd-journald[1478]: Time spent on flushing to /var/log/journal/ec2738e24efa74d38e3a44786666d9bd is 72.995ms for 899 entries. Jan 13 20:09:46.079591 systemd-journald[1478]: System Journal (/var/log/journal/ec2738e24efa74d38e3a44786666d9bd) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:09:46.181145 systemd-journald[1478]: Received client request to flush runtime journal. Jan 13 20:09:46.181561 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:09:46.181621 kernel: loop1: detected capacity change from 0 to 113552 Jan 13 20:09:46.093653 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 13 20:09:46.093678 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 13 20:09:46.102661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 13 20:09:46.103797 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:09:46.118391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:09:46.133565 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:09:46.164182 udevadm[1542]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:09:46.191050 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:09:46.225457 kernel: loop2: detected capacity change from 0 to 194096 Jan 13 20:09:46.246168 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:09:46.270373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:09:46.329775 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Jan 13 20:09:46.330683 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Jan 13 20:09:46.341920 kernel: loop3: detected capacity change from 0 to 53784 Jan 13 20:09:46.348259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:09:46.478296 kernel: loop4: detected capacity change from 0 to 116784 Jan 13 20:09:46.510307 kernel: loop5: detected capacity change from 0 to 113552 Jan 13 20:09:46.550404 kernel: loop6: detected capacity change from 0 to 194096 Jan 13 20:09:46.597252 kernel: loop7: detected capacity change from 0 to 53784 Jan 13 20:09:46.609833 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:09:46.614100 (sd-merge)[1561]: Merged extensions into '/usr'. Jan 13 20:09:46.626514 systemd[1]: Reloading requested from client PID 1511 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:09:46.626549 systemd[1]: Reloading... Jan 13 20:09:46.806038 zram_generator::config[1587]: No configuration found. Jan 13 20:09:46.828828 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:09:47.106810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:47.225855 systemd[1]: Reloading finished in 598 ms. Jan 13 20:09:47.265886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:09:47.269751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:09:47.275529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:09:47.291585 systemd[1]: Starting ensure-sysext.service... Jan 13 20:09:47.302746 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:09:47.312576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:09:47.328446 systemd[1]: Reloading requested from client PID 1640 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:09:47.328477 systemd[1]: Reloading... Jan 13 20:09:47.351054 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:09:47.351630 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
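Note on the (sd-merge) messages above: systemd-sysext overlays the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' onto /usr, but only after each image's extension-release metadata is judged compatible with the running OS. The sketch below is a rough, simplified Python approximation of that compatibility check, not the real systemd implementation, which handles more fields and corner cases.

    # Simplified approximation of the check systemd-sysext applies before merging
    # an extension image onto /usr. File paths in the example call are illustrative.
    def parse_release(path):
        fields = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    fields[key] = value.strip('"')
        return fields

    def sysext_compatible(host, ext):
        if ext.get("ID") == "_any":                 # extension declares itself OS-agnostic
            return True
        if ext.get("ID") != host.get("ID"):         # distro ID must match (e.g. "flatcar")
            return False
        if "SYSEXT_LEVEL" in ext:                   # level-based matching, if declared
            return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
        return ext.get("VERSION_ID") == host.get("VERSION_ID")   # else exact version match

    # Example (hypothetical paths):
    # host = parse_release("/etc/os-release")
    # ext = parse_release("/opt/extensions/kubernetes/usr/lib/extension-release.d/"
    #                     "extension-release.kubernetes")
    # print(sysext_compatible(host, ext))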
Jan 13 20:09:47.353562 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:09:47.354149 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 13 20:09:47.354346 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 13 20:09:47.362436 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:09:47.362462 systemd-tmpfiles[1641]: Skipping /boot Jan 13 20:09:47.383820 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:09:47.383850 systemd-tmpfiles[1641]: Skipping /boot Jan 13 20:09:47.446103 systemd-udevd[1642]: Using default interface naming scheme 'v255'. Jan 13 20:09:47.507403 zram_generator::config[1671]: No configuration found. Jan 13 20:09:47.653846 (udev-worker)[1675]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:47.847506 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1675) Jan 13 20:09:47.917068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:48.078320 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:09:48.078890 systemd[1]: Reloading finished in 749 ms. Jan 13 20:09:48.123453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:09:48.130060 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:09:48.200267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:09:48.219919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:09:48.236198 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:09:48.244953 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:09:48.249748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:48.255974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:09:48.266819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:48.274794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:48.282967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:48.289792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:48.298835 lvm[1838]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:09:48.304814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:09:48.313730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:09:48.329846 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:09:48.341752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:09:48.351670 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 13 20:09:48.365423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:09:48.375163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:48.376862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:48.384753 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:48.385065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:48.401950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:48.407695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:48.413662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:48.413941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:48.419404 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:09:48.420889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:48.423310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:48.424962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:09:48.437316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:48.443500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:09:48.448057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:48.448441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:48.448776 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:09:48.467302 systemd[1]: Finished ensure-sysext.service. Jan 13 20:09:48.482100 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:09:48.482499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:09:48.500549 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:09:48.511052 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:09:48.511504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:09:48.524306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:09:48.536189 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:09:48.566794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:48.570063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:48.570837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:48.571890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:48.574665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:09:48.589786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 13 20:09:48.590784 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:48.593313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:48.594854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:09:48.594945 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:09:48.612413 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:09:48.620151 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:09:48.628530 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:09:48.633551 augenrules[1887]: No rules Jan 13 20:09:48.637599 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:09:48.640351 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:09:48.685412 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:09:48.691003 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:09:48.708170 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:09:48.720339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:48.816843 systemd-networkd[1850]: lo: Link UP Jan 13 20:09:48.816874 systemd-networkd[1850]: lo: Gained carrier Jan 13 20:09:48.819886 systemd-networkd[1850]: Enumeration completed Jan 13 20:09:48.820092 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:09:48.826944 systemd-resolved[1851]: Positive Trust Anchors: Jan 13 20:09:48.827056 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:09:48.827063 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:09:48.827614 systemd-resolved[1851]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:09:48.827769 systemd-resolved[1851]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:09:48.830462 systemd-networkd[1850]: eth0: Link UP Jan 13 20:09:48.831118 systemd-networkd[1850]: eth0: Gained carrier Jan 13 20:09:48.831170 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:09:48.833595 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:09:48.845063 systemd-resolved[1851]: Defaulting to hostname 'linux'. 
Jan 13 20:09:48.846412 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.26.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:09:48.849776 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:09:48.853303 systemd[1]: Reached target network.target - Network. Jan 13 20:09:48.855609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:09:48.858373 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:09:48.860965 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:09:48.863852 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:09:48.866884 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:09:48.869700 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:09:48.872707 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:09:48.875596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:09:48.875642 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:09:48.877717 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:09:48.881677 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:09:48.886747 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:09:48.901439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:09:48.905088 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:09:48.907992 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:09:48.910381 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:09:48.912619 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:09:48.912673 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:09:48.914971 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:09:48.922574 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:09:48.931204 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:09:48.945191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:09:48.951483 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:09:48.954394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:09:48.958539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:09:48.967595 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:09:48.974541 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:09:48.987604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:09:48.994466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:09:49.012069 jq[1913]: false Jan 13 20:09:49.018596 systemd[1]: Starting systemd-logind.service - User Login Management... 
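Note on the DHCPv4 lease above: eth0 acquired 172.31.26.215/20 with gateway 172.31.16.1. As a quick sanity check of that prefix length, Python's standard ipaddress module confirms which on-link network the address and gateway share:

    import ipaddress

    # Values taken from the systemd-networkd lease message above.
    iface = ipaddress.ip_interface("172.31.26.215/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                  # 172.31.16.0/20
    print(iface.network.num_addresses)    # 4096 addresses in a /20
    print(gateway in iface.network)       # True: the gateway is on-link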
Jan 13 20:09:49.024179 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:09:49.025064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:09:49.028515 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:09:49.037461 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:09:49.044950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:09:49.045331 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:09:49.068926 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:09:49.091623 (ntainerd)[1929]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:09:49.107507 dbus-daemon[1912]: [system] SELinux support is enabled Jan 13 20:09:49.107835 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:09:49.117585 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:09:49.117645 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:09:49.124405 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:09:49.124457 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:09:49.127402 ntpd[1916]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: ---------------------------------------------------- Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: corporation. Support and training for ntp-4 are Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: available at https://www.nwtime.org/support Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: ---------------------------------------------------- Jan 13 20:09:49.138715 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: proto: precision = 0.096 usec (-23) Jan 13 20:09:49.127459 ntpd[1916]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:09:49.151350 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: basedate set to 2025-01-01 Jan 13 20:09:49.151350 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: gps base set to 2025-01-05 (week 2348) Jan 13 20:09:49.127480 ntpd[1916]: ---------------------------------------------------- Jan 13 20:09:49.127498 ntpd[1916]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:09:49.127515 ntpd[1916]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:09:49.127534 ntpd[1916]: corporation. Support and training for ntp-4 are Jan 13 20:09:49.127550 ntpd[1916]: available at https://www.nwtime.org/support Jan 13 20:09:49.127567 ntpd[1916]: ---------------------------------------------------- Jan 13 20:09:49.132168 ntpd[1916]: proto: precision = 0.096 usec (-23) Jan 13 20:09:49.140422 dbus-daemon[1912]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:09:49.142786 ntpd[1916]: basedate set to 2025-01-01 Jan 13 20:09:49.142819 ntpd[1916]: gps base set to 2025-01-05 (week 2348) Jan 13 20:09:49.157638 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:09:49.178400 coreos-metadata[1911]: Jan 13 20:09:49.176 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:09:49.178400 coreos-metadata[1911]: Jan 13 20:09:49.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listen normally on 3 eth0 172.31.26.215:123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listen normally on 4 lo [::1]:123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: bind(21) AF_INET6 fe80::46e:7bff:fe29:9931%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: unable to create socket on eth0 (5) for fe80::46e:7bff:fe29:9931%2#123 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: failed to init interface for address fe80::46e:7bff:fe29:9931%2 Jan 13 20:09:49.191062 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: Listening on routing socket on fd #21 for interface updates Jan 13 20:09:49.171094 ntpd[1916]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.178 INFO Fetch successful Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.178 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.179 INFO Fetch successful Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.179 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.189 INFO Fetch successful Jan 13 20:09:49.191860 coreos-metadata[1911]: Jan 13 20:09:49.189 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:09:49.183667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:09:49.171174 ntpd[1916]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:09:49.184052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:09:49.174493 ntpd[1916]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:09:49.200890 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:09:49.200890 ntpd[1916]: 13 Jan 20:09:49 ntpd[1916]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.192 INFO Fetch successful Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.192 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.193 INFO Fetch failed with 404: resource not found Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.196 INFO Fetch successful Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.197 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.199 INFO Fetch successful Jan 13 20:09:49.200982 coreos-metadata[1911]: Jan 13 20:09:49.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:09:49.174568 ntpd[1916]: Listen normally on 3 eth0 172.31.26.215:123 Jan 13 20:09:49.174658 ntpd[1916]: Listen normally on 4 lo [::1]:123 Jan 13 20:09:49.174745 ntpd[1916]: bind(21) AF_INET6 fe80::46e:7bff:fe29:9931%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:09:49.215577 coreos-metadata[1911]: Jan 13 20:09:49.208 INFO Fetch successful Jan 13 20:09:49.215577 coreos-metadata[1911]: Jan 13 20:09:49.208 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:09:49.174785 ntpd[1916]: unable to create socket on eth0 (5) for fe80::46e:7bff:fe29:9931%2#123 Jan 13 20:09:49.174813 ntpd[1916]: failed to init interface for address fe80::46e:7bff:fe29:9931%2 Jan 13 20:09:49.174870 ntpd[1916]: Listening on routing socket on fd #21 for interface updates Jan 13 20:09:49.197353 ntpd[1916]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:09:49.197406 ntpd[1916]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:09:49.223127 coreos-metadata[1911]: Jan 13 20:09:49.217 INFO Fetch successful Jan 13 20:09:49.223127 coreos-metadata[1911]: Jan 13 20:09:49.217 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:09:49.223127 coreos-metadata[1911]: Jan 13 20:09:49.218 INFO Fetch successful Jan 13 20:09:49.223356 jq[1924]: true Jan 13 20:09:49.261260 extend-filesystems[1914]: Found loop4 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found loop5 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found loop6 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found loop7 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found nvme0n1 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found nvme0n1p1 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found nvme0n1p2 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found nvme0n1p3 Jan 13 20:09:49.261260 extend-filesystems[1914]: Found usr Jan 13 20:09:49.252321 systemd[1]: motdgen.service: Deactivated successfully. 
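Note on the coreos-metadata fetches above: the agent first PUTs to http://169.254.169.254/latest/api/token and then GETs individual meta-data paths with the returned token. That is the standard IMDSv2 session pattern; a minimal standard-library Python illustration (only meaningful when run on an EC2 instance) looks like this:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # IMDSv2 step 1: obtain a short-lived session token, as in the PUT logged above.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=2) as resp:
        token = resp.read().decode()

    # IMDSv2 step 2: present the token on each meta-data request.
    def fetch(path):
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    print(fetch("instance-id"))
    print(fetch("placement/availability-zone"))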
Jan 13 20:09:49.312789 extend-filesystems[1914]: Found nvme0n1p4 Jan 13 20:09:49.312789 extend-filesystems[1914]: Found nvme0n1p6 Jan 13 20:09:49.312789 extend-filesystems[1914]: Found nvme0n1p7 Jan 13 20:09:49.312789 extend-filesystems[1914]: Found nvme0n1p9 Jan 13 20:09:49.312789 extend-filesystems[1914]: Checking size of /dev/nvme0n1p9 Jan 13 20:09:49.253357 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:09:49.373300 update_engine[1923]: I20250113 20:09:49.369972 1923 main.cc:92] Flatcar Update Engine starting Jan 13 20:09:49.379478 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:09:49.397667 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:09:49.419251 update_engine[1923]: I20250113 20:09:49.412475 1923 update_check_scheduler.cc:74] Next update check in 4m57s Jan 13 20:09:49.421960 extend-filesystems[1914]: Resized partition /dev/nvme0n1p9 Jan 13 20:09:49.439680 jq[1949]: true Jan 13 20:09:49.436774 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:09:49.455245 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:09:49.455332 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:09:49.470332 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:09:49.475828 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:09:49.559258 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:09:49.600394 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:09:49.600394 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:09:49.600394 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:09:49.627952 extend-filesystems[1914]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:09:49.637054 bash[1987]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:09:49.613304 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:09:49.619086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:09:49.631659 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:09:49.636454 systemd-logind[1922]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:09:49.636489 systemd-logind[1922]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:09:49.647286 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1675) Jan 13 20:09:49.651548 systemd-logind[1922]: New seat seat0. Jan 13 20:09:49.689064 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:09:49.689772 dbus-daemon[1912]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1936 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:09:49.702149 containerd[1929]: time="2025-01-13T20:09:49.701874945Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:09:49.717929 systemd[1]: Starting sshkeys.service... Jan 13 20:09:49.720986 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:09:49.729465 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
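Note on the resize above: resize2fs grows the ext4 filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 5.7 GiB. The arithmetic, for reference:

    BLOCK_SIZE = 4096            # ext4 block size reported in the log ("(4k) blocks")

    old_blocks = 553_472         # size before the on-line resize
    new_blocks = 1_489_915       # size after resize2fs finished

    gib = 1024 ** 3
    print(f"before: {old_blocks * BLOCK_SIZE / gib:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK_SIZE / gib:.2f} GiB")   # ~5.68 GiB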
Jan 13 20:09:49.748672 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:09:49.772835 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:09:49.786572 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:09:49.805441 polkitd[2013]: Started polkitd version 121 Jan 13 20:09:49.820503 polkitd[2013]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:09:49.820759 polkitd[2013]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:09:49.825282 polkitd[2013]: Finished loading, compiling and executing 2 rules Jan 13 20:09:49.826939 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:09:49.827184 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:09:49.835687 polkitd[2013]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:09:49.846811 containerd[1929]: time="2025-01-13T20:09:49.846102610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.849069 containerd[1929]: time="2025-01-13T20:09:49.848912182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:49.849069 containerd[1929]: time="2025-01-13T20:09:49.849003706Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:09:49.849069 containerd[1929]: time="2025-01-13T20:09:49.849067030Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:09:49.849628 containerd[1929]: time="2025-01-13T20:09:49.849582022Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.849753454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.849929962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.849960166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.850313446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.850351006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.850383178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.850408378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.850587310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.851245 containerd[1929]: time="2025-01-13T20:09:49.851014522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:49.857305 containerd[1929]: time="2025-01-13T20:09:49.856860514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:49.857305 containerd[1929]: time="2025-01-13T20:09:49.856923934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:09:49.858698 containerd[1929]: time="2025-01-13T20:09:49.857151730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:09:49.858698 containerd[1929]: time="2025-01-13T20:09:49.858450442Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:09:49.871864 containerd[1929]: time="2025-01-13T20:09:49.871805914Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:09:49.872314 containerd[1929]: time="2025-01-13T20:09:49.872184766Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:09:49.872699 containerd[1929]: time="2025-01-13T20:09:49.872667934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:09:49.873008 containerd[1929]: time="2025-01-13T20:09:49.872953534Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:09:49.873150 containerd[1929]: time="2025-01-13T20:09:49.873123238Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:09:49.876262 containerd[1929]: time="2025-01-13T20:09:49.873515602Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:09:49.876262 containerd[1929]: time="2025-01-13T20:09:49.874014910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:09:49.880829 containerd[1929]: time="2025-01-13T20:09:49.880745686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:09:49.881029 containerd[1929]: time="2025-01-13T20:09:49.880998418Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:09:49.881172 containerd[1929]: time="2025-01-13T20:09:49.881144434Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:09:49.881383 containerd[1929]: time="2025-01-13T20:09:49.881352754Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 13 20:09:49.881520 containerd[1929]: time="2025-01-13T20:09:49.881493322Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.881649 containerd[1929]: time="2025-01-13T20:09:49.881622850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.881831 containerd[1929]: time="2025-01-13T20:09:49.881785906Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.881981 containerd[1929]: time="2025-01-13T20:09:49.881952994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.883477 systemd-hostnamed[1936]: Hostname set to (transient) Jan 13 20:09:49.884525 containerd[1929]: time="2025-01-13T20:09:49.882327238Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.884525 containerd[1929]: time="2025-01-13T20:09:49.883412578Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.884525 containerd[1929]: time="2025-01-13T20:09:49.883512394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:09:49.884525 containerd[1929]: time="2025-01-13T20:09:49.883890766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.883662 systemd-resolved[1851]: System hostname changed to 'ip-172-31-26-215'. Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.883938526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886256470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886356934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886397650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886462222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886516474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886552054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.887285 containerd[1929]: time="2025-01-13T20:09:49.886624330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.889260 containerd[1929]: time="2025-01-13T20:09:49.887750782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.890352 containerd[1929]: time="2025-01-13T20:09:49.890282902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890510038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890575030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890632690Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890716354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890777938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.890806978Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.891030706Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.891075358Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.891124054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.891158014Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:09:49.891227 containerd[1929]: time="2025-01-13T20:09:49.891182350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:09:49.892415 containerd[1929]: time="2025-01-13T20:09:49.891831910Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:09:49.892415 containerd[1929]: time="2025-01-13T20:09:49.892242514Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:09:49.892415 containerd[1929]: time="2025-01-13T20:09:49.892277818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:09:49.896108 containerd[1929]: time="2025-01-13T20:09:49.895656010Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:09:49.896108 containerd[1929]: time="2025-01-13T20:09:49.895800070Z" level=info msg="Connect containerd service" Jan 13 20:09:49.896108 containerd[1929]: time="2025-01-13T20:09:49.895903054Z" level=info msg="using legacy CRI server" Jan 13 20:09:49.896108 containerd[1929]: time="2025-01-13T20:09:49.895923886Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:09:49.899718 containerd[1929]: time="2025-01-13T20:09:49.897429826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:09:49.901924 containerd[1929]: time="2025-01-13T20:09:49.901855318Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:09:49.902502 
containerd[1929]: time="2025-01-13T20:09:49.902457322Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:09:49.902603 containerd[1929]: time="2025-01-13T20:09:49.902568418Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:09:49.902713 containerd[1929]: time="2025-01-13T20:09:49.902660530Z" level=info msg="Start subscribing containerd event" Jan 13 20:09:49.902767 containerd[1929]: time="2025-01-13T20:09:49.902730142Z" level=info msg="Start recovering state" Jan 13 20:09:49.909649 containerd[1929]: time="2025-01-13T20:09:49.902842738Z" level=info msg="Start event monitor" Jan 13 20:09:49.909649 containerd[1929]: time="2025-01-13T20:09:49.902881282Z" level=info msg="Start snapshots syncer" Jan 13 20:09:49.909649 containerd[1929]: time="2025-01-13T20:09:49.902921590Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:09:49.909649 containerd[1929]: time="2025-01-13T20:09:49.902941330Z" level=info msg="Start streaming server" Jan 13 20:09:49.909649 containerd[1929]: time="2025-01-13T20:09:49.903075454Z" level=info msg="containerd successfully booted in 0.204754s" Jan 13 20:09:49.903192 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:09:50.025403 coreos-metadata[2015]: Jan 13 20:09:50.024 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:09:50.029322 coreos-metadata[2015]: Jan 13 20:09:50.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:09:50.029322 coreos-metadata[2015]: Jan 13 20:09:50.027 INFO Fetch successful Jan 13 20:09:50.029322 coreos-metadata[2015]: Jan 13 20:09:50.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:09:50.029322 coreos-metadata[2015]: Jan 13 20:09:50.028 INFO Fetch successful Jan 13 20:09:50.032758 unknown[2015]: wrote ssh authorized keys file for user: core Jan 13 20:09:50.074341 locksmithd[1968]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:09:50.107334 update-ssh-keys[2090]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:09:50.102292 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:09:50.119871 systemd[1]: Finished sshkeys.service. Jan 13 20:09:50.128099 ntpd[1916]: bind(24) AF_INET6 fe80::46e:7bff:fe29:9931%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:09:50.128755 ntpd[1916]: 13 Jan 20:09:50 ntpd[1916]: bind(24) AF_INET6 fe80::46e:7bff:fe29:9931%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:09:50.128755 ntpd[1916]: 13 Jan 20:09:50 ntpd[1916]: unable to create socket on eth0 (6) for fe80::46e:7bff:fe29:9931%2#123 Jan 13 20:09:50.128755 ntpd[1916]: 13 Jan 20:09:50 ntpd[1916]: failed to init interface for address fe80::46e:7bff:fe29:9931%2 Jan 13 20:09:50.128153 ntpd[1916]: unable to create socket on eth0 (6) for fe80::46e:7bff:fe29:9931%2#123 Jan 13 20:09:50.128182 ntpd[1916]: failed to init interface for address fe80::46e:7bff:fe29:9931%2 Jan 13 20:09:50.736393 systemd-networkd[1850]: eth0: Gained IPv6LL Jan 13 20:09:50.741272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:09:50.746573 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:09:50.760587 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
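The coreos-metadata entries above show the IMDSv2 exchange: a PUT to the token endpoint, then authenticated GETs for the instance's public SSH key. Below is a hedged sketch of that two-step fetch using the 2021-01-03 paths seen in the log; the 21600-second token TTL is an assumed value, not something the log states.

```go
// Sketch of the IMDSv2 exchange coreos-metadata performs above: request a
// session token, then fetch the instance's OpenSSH public key with it.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

const imds = "http://169.254.169.254"

func main() {
	// Step 1: obtain a session token (the 21600s TTL is an assumed choice).
	req, _ := http.NewRequest(http.MethodPut, imds+"/latest/api/token", strings.NewReader(""))
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: fetch the key using the metadata path seen in the log.
	req, _ = http.NewRequest(http.MethodGet, imds+"/2021-01-03/meta-data/public-keys/0/openssh-key", nil)
	req.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	key, _ := io.ReadAll(resp.Body)
	fmt.Println(string(key))
}
```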
Jan 13 20:09:50.775303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:50.783685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:09:50.889776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:09:50.901197 amazon-ssm-agent[2114]: Initializing new seelog logger Jan 13 20:09:50.901738 amazon-ssm-agent[2114]: New Seelog Logger Creation Complete Jan 13 20:09:50.901738 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.901738 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.903293 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 processing appconfig overrides Jan 13 20:09:50.903293 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.903293 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.903293 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 processing appconfig overrides Jan 13 20:09:50.904362 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.904362 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.904489 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 processing appconfig overrides Jan 13 20:09:50.905274 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO Proxy environment variables: Jan 13 20:09:50.910044 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.910044 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:09:50.910270 amazon-ssm-agent[2114]: 2025/01/13 20:09:50 processing appconfig overrides Jan 13 20:09:51.005193 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO https_proxy: Jan 13 20:09:51.105317 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO http_proxy: Jan 13 20:09:51.203645 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO no_proxy: Jan 13 20:09:51.303244 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:09:51.401283 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:09:51.500720 amazon-ssm-agent[2114]: 2025-01-13 20:09:50 INFO Agent will take identity from EC2 Jan 13 20:09:51.602750 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:09:51.702173 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:09:51.803281 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:09:51.903177 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [Registrar] Starting registrar module Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:09:51.958580 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:09:52.002813 amazon-ssm-agent[2114]: 2025-01-13 20:09:51 INFO [CredentialRefresher] Next credential rotation will be in 31.258323851533333 minutes Jan 13 20:09:52.363602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:52.378104 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:09:52.750057 sshd_keygen[1955]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:09:52.791294 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:09:52.803951 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:09:52.816681 systemd[1]: Started sshd@0-172.31.26.215:22-147.75.109.163:48836.service - OpenSSH per-connection server daemon (147.75.109.163:48836). Jan 13 20:09:52.833796 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:09:52.836327 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:09:52.849707 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:09:52.895758 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:09:52.917603 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:09:52.922836 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:09:52.928202 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:09:52.932591 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:09:52.937249 systemd[1]: Startup finished in 1.069s (kernel) + 7.322s (initrd) + 8.880s (userspace) = 17.271s. Jan 13 20:09:52.984101 agetty[2158]: failed to open credentials directory Jan 13 20:09:52.987117 agetty[2160]: failed to open credentials directory Jan 13 20:09:53.001174 amazon-ssm-agent[2114]: 2025-01-13 20:09:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:09:53.096805 sshd[2152]: Accepted publickey for core from 147.75.109.163 port 48836 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:53.102200 amazon-ssm-agent[2114]: 2025-01-13 20:09:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2166) started Jan 13 20:09:53.102603 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:53.128104 ntpd[1916]: Listen normally on 7 eth0 [fe80::46e:7bff:fe29:9931%2]:123 Jan 13 20:09:53.128172 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 13 20:09:53.129568 ntpd[1916]: 13 Jan 20:09:53 ntpd[1916]: Listen normally on 7 eth0 [fe80::46e:7bff:fe29:9931%2]:123 Jan 13 20:09:53.136835 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:09:53.145983 systemd-logind[1922]: New session 1 of user core. Jan 13 20:09:53.181927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:09:53.192998 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:09:53.206569 amazon-ssm-agent[2114]: 2025-01-13 20:09:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:09:53.221081 (systemd)[2174]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:09:53.462068 systemd[2174]: Queued start job for default target default.target. Jan 13 20:09:53.473587 systemd[2174]: Created slice app.slice - User Application Slice. Jan 13 20:09:53.473830 systemd[2174]: Reached target paths.target - Paths. Jan 13 20:09:53.473964 systemd[2174]: Reached target timers.target - Timers. Jan 13 20:09:53.476833 systemd[2174]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:09:53.526763 systemd[2174]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:09:53.527080 systemd[2174]: Reached target sockets.target - Sockets. Jan 13 20:09:53.527119 systemd[2174]: Reached target basic.target - Basic System. Jan 13 20:09:53.529372 systemd[2174]: Reached target default.target - Main User Target. Jan 13 20:09:53.529467 systemd[2174]: Startup finished in 288ms. Jan 13 20:09:53.529646 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:09:53.536523 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:09:53.602264 kubelet[2138]: E0113 20:09:53.602168 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:09:53.607056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:09:53.607420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:09:53.608128 systemd[1]: kubelet.service: Consumed 1.284s CPU time. Jan 13 20:09:53.691798 systemd[1]: Started sshd@1-172.31.26.215:22-147.75.109.163:48848.service - OpenSSH per-connection server daemon (147.75.109.163:48848). Jan 13 20:09:53.883020 sshd[2193]: Accepted publickey for core from 147.75.109.163 port 48848 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:53.885453 sshd-session[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:53.892746 systemd-logind[1922]: New session 2 of user core. Jan 13 20:09:53.904502 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:09:54.035307 sshd[2195]: Connection closed by 147.75.109.163 port 48848 Jan 13 20:09:54.036061 sshd-session[2193]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:54.042169 systemd[1]: sshd@1-172.31.26.215:22-147.75.109.163:48848.service: Deactivated successfully. Jan 13 20:09:54.045368 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:09:54.046817 systemd-logind[1922]: Session 2 logged out. Waiting for processes to exit. 
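The kubelet exit above happens because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file normally appears later (for example from kubeadm) before the unit is restarted. For orientation only, here is a sketch of writing a minimal KubeletConfiguration to that path; the fields are assumptions rather than the config this node eventually receives, though cgroupDriver: systemd is consistent with the SystemdCgroup:true runc option in the containerd CRI config logged earlier.

```go
// Illustration of the file kubelet is missing above. On a kubeadm-managed
// node this is generated during join; the fields below are assumed minimums,
// not the configuration this host actually received.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```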
Jan 13 20:09:54.048725 systemd-logind[1922]: Removed session 2. Jan 13 20:09:54.073710 systemd[1]: Started sshd@2-172.31.26.215:22-147.75.109.163:48850.service - OpenSSH per-connection server daemon (147.75.109.163:48850). Jan 13 20:09:54.260564 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 48850 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:54.262994 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:54.272522 systemd-logind[1922]: New session 3 of user core. Jan 13 20:09:54.280534 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:09:54.396127 sshd[2202]: Connection closed by 147.75.109.163 port 48850 Jan 13 20:09:54.397003 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:54.403349 systemd-logind[1922]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:09:54.403730 systemd[1]: sshd@2-172.31.26.215:22-147.75.109.163:48850.service: Deactivated successfully. Jan 13 20:09:54.407712 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:09:54.409322 systemd-logind[1922]: Removed session 3. Jan 13 20:09:54.432753 systemd[1]: Started sshd@3-172.31.26.215:22-147.75.109.163:48866.service - OpenSSH per-connection server daemon (147.75.109.163:48866). Jan 13 20:09:54.630012 sshd[2207]: Accepted publickey for core from 147.75.109.163 port 48866 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:54.632403 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:54.639564 systemd-logind[1922]: New session 4 of user core. Jan 13 20:09:54.647496 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:09:54.775796 sshd[2209]: Connection closed by 147.75.109.163 port 48866 Jan 13 20:09:54.775093 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:54.780137 systemd[1]: sshd@3-172.31.26.215:22-147.75.109.163:48866.service: Deactivated successfully. Jan 13 20:09:54.782734 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:09:54.786149 systemd-logind[1922]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:09:54.788138 systemd-logind[1922]: Removed session 4. Jan 13 20:09:54.816682 systemd[1]: Started sshd@4-172.31.26.215:22-147.75.109.163:48874.service - OpenSSH per-connection server daemon (147.75.109.163:48874). Jan 13 20:09:54.991226 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 48874 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:54.993555 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:55.000511 systemd-logind[1922]: New session 5 of user core. Jan 13 20:09:55.008471 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:09:55.121562 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:09:55.122161 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:55.138848 sudo[2217]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:55.162983 sshd[2216]: Connection closed by 147.75.109.163 port 48874 Jan 13 20:09:55.161853 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:55.167457 systemd[1]: sshd@4-172.31.26.215:22-147.75.109.163:48874.service: Deactivated successfully. 
Jan 13 20:09:55.170808 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:09:55.174027 systemd-logind[1922]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:09:55.175981 systemd-logind[1922]: Removed session 5. Jan 13 20:09:55.204717 systemd[1]: Started sshd@5-172.31.26.215:22-147.75.109.163:48888.service - OpenSSH per-connection server daemon (147.75.109.163:48888). Jan 13 20:09:55.384257 sshd[2222]: Accepted publickey for core from 147.75.109.163 port 48888 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:55.386755 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:55.394614 systemd-logind[1922]: New session 6 of user core. Jan 13 20:09:55.403464 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:09:55.506556 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:09:55.507171 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:55.513290 sudo[2226]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:55.523167 sudo[2225]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:09:55.524320 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:55.546341 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:09:55.605402 augenrules[2248]: No rules Jan 13 20:09:55.607835 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:09:55.609363 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:09:55.611267 sudo[2225]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:55.635485 sshd[2224]: Connection closed by 147.75.109.163 port 48888 Jan 13 20:09:55.635296 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:55.640754 systemd[1]: sshd@5-172.31.26.215:22-147.75.109.163:48888.service: Deactivated successfully. Jan 13 20:09:55.643557 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:09:55.646481 systemd-logind[1922]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:09:55.648134 systemd-logind[1922]: Removed session 6. Jan 13 20:09:55.674702 systemd[1]: Started sshd@6-172.31.26.215:22-147.75.109.163:48898.service - OpenSSH per-connection server daemon (147.75.109.163:48898). Jan 13 20:09:55.853082 sshd[2256]: Accepted publickey for core from 147.75.109.163 port 48898 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:55.855453 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:55.863542 systemd-logind[1922]: New session 7 of user core. Jan 13 20:09:55.873447 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:09:55.975557 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:09:55.976151 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:57.038198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:57.039023 systemd[1]: kubelet.service: Consumed 1.284s CPU time. Jan 13 20:09:57.050769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:57.079777 systemd[1]: Reloading requested from client PID 2295 ('systemctl') (unit session-7.scope)... 
Jan 13 20:09:57.079959 systemd[1]: Reloading... Jan 13 20:09:57.311405 zram_generator::config[2334]: No configuration found. Jan 13 20:09:57.541347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:57.712833 systemd[1]: Reloading finished in 632 ms. Jan 13 20:09:57.802671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:57.811152 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:09:57.811660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:57.817824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:58.106768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:58.123938 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:09:58.203961 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:58.203961 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:09:58.203961 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:58.205737 kubelet[2399]: I0113 20:09:58.205662 2399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:09:59.649230 kubelet[2399]: I0113 20:09:59.648851 2399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:09:59.649230 kubelet[2399]: I0113 20:09:59.648893 2399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:09:59.649819 kubelet[2399]: I0113 20:09:59.649283 2399 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:09:59.678649 kubelet[2399]: I0113 20:09:59.678435 2399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:09:59.692314 kubelet[2399]: I0113 20:09:59.692279 2399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:09:59.697402 kubelet[2399]: I0113 20:09:59.697346 2399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:09:59.698281 kubelet[2399]: I0113 20:09:59.697548 2399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.26.215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:09:59.698281 kubelet[2399]: I0113 20:09:59.697864 2399 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:09:59.698281 kubelet[2399]: I0113 20:09:59.697886 2399 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:09:59.698281 kubelet[2399]: I0113 20:09:59.698124 2399 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:59.699794 kubelet[2399]: I0113 20:09:59.699752 2399 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:09:59.699887 kubelet[2399]: I0113 20:09:59.699797 2399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:09:59.699887 kubelet[2399]: I0113 20:09:59.699868 2399 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:09:59.700015 kubelet[2399]: I0113 20:09:59.699934 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:09:59.702243 kubelet[2399]: E0113 20:09:59.700686 2399 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:59.702243 kubelet[2399]: E0113 20:09:59.701447 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:59.703043 kubelet[2399]: I0113 20:09:59.702948 2399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:09:59.703451 kubelet[2399]: I0113 20:09:59.703409 2399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:09:59.703536 kubelet[2399]: W0113 20:09:59.703511 2399 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:09:59.706014 kubelet[2399]: I0113 20:09:59.705771 2399 server.go:1264] "Started kubelet" Jan 13 20:09:59.709515 kubelet[2399]: I0113 20:09:59.709426 2399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:09:59.710880 kubelet[2399]: I0113 20:09:59.710674 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:09:59.712884 kubelet[2399]: I0113 20:09:59.712638 2399 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:09:59.716138 kubelet[2399]: I0113 20:09:59.716010 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:09:59.717598 kubelet[2399]: I0113 20:09:59.716488 2399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:09:59.725943 kubelet[2399]: I0113 20:09:59.725898 2399 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:09:59.727805 kubelet[2399]: I0113 20:09:59.727752 2399 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:09:59.727947 kubelet[2399]: I0113 20:09:59.727912 2399 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:09:59.731410 kubelet[2399]: W0113 20:09:59.731363 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.26.215" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:09:59.731410 kubelet[2399]: E0113 20:09:59.731419 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.215" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:09:59.731641 kubelet[2399]: W0113 20:09:59.731627 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:09:59.731715 kubelet[2399]: E0113 20:09:59.731655 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:09:59.739858 kubelet[2399]: I0113 20:09:59.738371 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:09:59.739858 kubelet[2399]: E0113 20:09:59.739132 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.26.215\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 20:09:59.739858 kubelet[2399]: W0113 20:09:59.739373 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:09:59.739858 kubelet[2399]: E0113 20:09:59.739408 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:09:59.747633 kubelet[2399]: I0113 20:09:59.745537 2399 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:09:59.747633 kubelet[2399]: I0113 20:09:59.745574 2399 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:09:59.749279 kubelet[2399]: E0113 20:09:59.749231 2399 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:09:59.771028 kubelet[2399]: E0113 20:09:59.770835 2399 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.215.181a597cffb6c9bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.215,UID:172.31.26.215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.26.215,},FirstTimestamp:2025-01-13 20:09:59.705733565 +0000 UTC m=+1.575662913,LastTimestamp:2025-01-13 20:09:59.705733565 +0000 UTC m=+1.575662913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.215,}" Jan 13 20:09:59.778753 kubelet[2399]: E0113 20:09:59.777949 2399 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.26.215.181a597d024d9ad2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.26.215,UID:172.31.26.215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.26.215,},FirstTimestamp:2025-01-13 20:09:59.749171922 +0000 UTC m=+1.619101306,LastTimestamp:2025-01-13 20:09:59.749171922 +0000 UTC m=+1.619101306,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.215,}" Jan 13 20:09:59.779426 kubelet[2399]: I0113 20:09:59.779395 2399 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:09:59.779654 kubelet[2399]: I0113 20:09:59.779623 2399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:09:59.779740 kubelet[2399]: I0113 20:09:59.779676 2399 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:59.786437 kubelet[2399]: I0113 20:09:59.786379 2399 policy_none.go:49] "None policy: Start" Jan 13 20:09:59.788100 kubelet[2399]: I0113 20:09:59.787887 2399 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:09:59.788100 kubelet[2399]: I0113 20:09:59.787945 2399 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:09:59.808611 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:09:59.827937 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 20:09:59.830247 kubelet[2399]: I0113 20:09:59.829344 2399 kubelet_node_status.go:73] "Attempting to register node" node="172.31.26.215" Jan 13 20:09:59.836960 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:09:59.840401 kubelet[2399]: I0113 20:09:59.840142 2399 kubelet_node_status.go:76] "Successfully registered node" node="172.31.26.215" Jan 13 20:09:59.842528 kubelet[2399]: I0113 20:09:59.842462 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:09:59.844934 kubelet[2399]: I0113 20:09:59.844895 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:09:59.845293 kubelet[2399]: I0113 20:09:59.845130 2399 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:09:59.846489 kubelet[2399]: I0113 20:09:59.845199 2399 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:09:59.847258 kubelet[2399]: E0113 20:09:59.846662 2399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:09:59.849948 kubelet[2399]: I0113 20:09:59.848144 2399 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:09:59.849948 kubelet[2399]: I0113 20:09:59.849605 2399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:09:59.849948 kubelet[2399]: I0113 20:09:59.849771 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:09:59.862670 kubelet[2399]: E0113 20:09:59.861709 2399 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.215\" not found" Jan 13 20:09:59.950872 kubelet[2399]: E0113 20:09:59.950120 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.050663 kubelet[2399]: E0113 20:10:00.050606 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.151746 kubelet[2399]: E0113 20:10:00.151681 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.176367 sudo[2259]: pam_unix(sudo:session): session closed for user root Jan 13 20:10:00.199536 sshd[2258]: Connection closed by 147.75.109.163 port 48898 Jan 13 20:10:00.200347 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:00.207065 systemd[1]: sshd@6-172.31.26.215:22-147.75.109.163:48898.service: Deactivated successfully. Jan 13 20:10:00.211543 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:10:00.214536 systemd-logind[1922]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:10:00.216475 systemd-logind[1922]: Removed session 7. 
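Around this point the kubelet registers node 172.31.26.215 and then keeps logging "node not found" from its local lister until its informers are allowed to list nodes (the system:anonymous list/watch denials above suggest this clears once client certificate rotation completes). A hedged client-go sketch for confirming from outside that the Node object exists follows; the kubeconfig path is an assumption, not taken from the log.

```go
// Sketch: verify from outside the kubelet that the Node object is registered.
// The kubeconfig path below is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "172.31.26.215", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion)
}
```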
Jan 13 20:10:00.252836 kubelet[2399]: E0113 20:10:00.252791 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.353447 kubelet[2399]: E0113 20:10:00.353399 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.454026 kubelet[2399]: E0113 20:10:00.453984 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.554631 kubelet[2399]: E0113 20:10:00.554584 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.652354 kubelet[2399]: I0113 20:10:00.652277 2399 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:10:00.652883 kubelet[2399]: W0113 20:10:00.652518 2399 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:10:00.655520 kubelet[2399]: E0113 20:10:00.655461 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.701941 kubelet[2399]: E0113 20:10:00.701897 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:00.756629 kubelet[2399]: E0113 20:10:00.756578 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.857729 kubelet[2399]: E0113 20:10:00.857597 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:00.958085 kubelet[2399]: E0113 20:10:00.958027 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:01.058616 kubelet[2399]: E0113 20:10:01.058568 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:01.159351 kubelet[2399]: E0113 20:10:01.159252 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:01.259374 kubelet[2399]: E0113 20:10:01.259316 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.26.215\" not found" Jan 13 20:10:01.360944 kubelet[2399]: I0113 20:10:01.360871 2399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:10:01.361578 containerd[1929]: time="2025-01-13T20:10:01.361524654Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
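containerd reports above that /etc/cni/net.d holds no network config and waits for another component to drop one; on this node that component is calico, whose pods are admitted just below. Purely as an illustration of what such a drop-in looks like, here is a sketch that writes a minimal bridge conflist using the 192.168.1.0/24 pod CIDR from the log; the plugin choice and file name are assumptions, not what calico installs.

```go
// Illustration only: what "dropping a config" into /etc/cni/net.d looks like.
// On this node calico installs the real config; the bridge/host-local plugins
// and the 10-demo.conflist name are assumptions for the sketch.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "demo",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/10-demo.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```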
Jan 13 20:10:01.362990 kubelet[2399]: I0113 20:10:01.362578 2399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:10:01.702025 kubelet[2399]: E0113 20:10:01.701971 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:01.702594 kubelet[2399]: I0113 20:10:01.702046 2399 apiserver.go:52] "Watching apiserver" Jan 13 20:10:01.711397 kubelet[2399]: I0113 20:10:01.711336 2399 topology_manager.go:215] "Topology Admit Handler" podUID="5cddd1e2-6d9e-4325-852d-c943445266fc" podNamespace="calico-system" podName="calico-node-shb9d" Jan 13 20:10:01.712804 kubelet[2399]: I0113 20:10:01.711487 2399 topology_manager.go:215] "Topology Admit Handler" podUID="eaac7307-8099-4833-a054-611983b758f2" podNamespace="calico-system" podName="csi-node-driver-57qj4" Jan 13 20:10:01.712804 kubelet[2399]: I0113 20:10:01.711604 2399 topology_manager.go:215] "Topology Admit Handler" podUID="46aeaaeb-d9ac-4b44-9de6-07a71c67ea74" podNamespace="kube-system" podName="kube-proxy-tkkr4" Jan 13 20:10:01.712804 kubelet[2399]: E0113 20:10:01.711779 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:01.723569 systemd[1]: Created slice kubepods-besteffort-pod5cddd1e2_6d9e_4325_852d_c943445266fc.slice - libcontainer container kubepods-besteffort-pod5cddd1e2_6d9e_4325_852d_c943445266fc.slice. Jan 13 20:10:01.728953 kubelet[2399]: I0113 20:10:01.728831 2399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:10:01.739015 kubelet[2399]: I0113 20:10:01.738549 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eaac7307-8099-4833-a054-611983b758f2-socket-dir\") pod \"csi-node-driver-57qj4\" (UID: \"eaac7307-8099-4833-a054-611983b758f2\") " pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:01.739015 kubelet[2399]: I0113 20:10:01.738638 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46aeaaeb-d9ac-4b44-9de6-07a71c67ea74-kube-proxy\") pod \"kube-proxy-tkkr4\" (UID: \"46aeaaeb-d9ac-4b44-9de6-07a71c67ea74\") " pod="kube-system/kube-proxy-tkkr4" Jan 13 20:10:01.739015 kubelet[2399]: I0113 20:10:01.738690 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46aeaaeb-d9ac-4b44-9de6-07a71c67ea74-lib-modules\") pod \"kube-proxy-tkkr4\" (UID: \"46aeaaeb-d9ac-4b44-9de6-07a71c67ea74\") " pod="kube-system/kube-proxy-tkkr4" Jan 13 20:10:01.739015 kubelet[2399]: I0113 20:10:01.738732 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-xtables-lock\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.739015 kubelet[2399]: I0113 20:10:01.738803 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-policysync\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.739381 kubelet[2399]: I0113 20:10:01.738879 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5cddd1e2-6d9e-4325-852d-c943445266fc-node-certs\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.739381 kubelet[2399]: I0113 20:10:01.738926 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-var-run-calico\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.742584 kubelet[2399]: I0113 20:10:01.741280 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46aeaaeb-d9ac-4b44-9de6-07a71c67ea74-xtables-lock\") pod \"kube-proxy-tkkr4\" (UID: \"46aeaaeb-d9ac-4b44-9de6-07a71c67ea74\") " pod="kube-system/kube-proxy-tkkr4" Jan 13 20:10:01.742584 kubelet[2399]: I0113 20:10:01.741390 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-cni-log-dir\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.742584 kubelet[2399]: I0113 20:10:01.741433 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eaac7307-8099-4833-a054-611983b758f2-registration-dir\") pod \"csi-node-driver-57qj4\" (UID: \"eaac7307-8099-4833-a054-611983b758f2\") " pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:01.742584 kubelet[2399]: I0113 20:10:01.741488 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfgv\" (UniqueName: \"kubernetes.io/projected/eaac7307-8099-4833-a054-611983b758f2-kube-api-access-lkfgv\") pod \"csi-node-driver-57qj4\" (UID: \"eaac7307-8099-4833-a054-611983b758f2\") " pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:01.742584 kubelet[2399]: I0113 20:10:01.741534 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eaac7307-8099-4833-a054-611983b758f2-varrun\") pod \"csi-node-driver-57qj4\" (UID: \"eaac7307-8099-4833-a054-611983b758f2\") " pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:01.741556 systemd[1]: Created slice kubepods-besteffort-pod46aeaaeb_d9ac_4b44_9de6_07a71c67ea74.slice - libcontainer container kubepods-besteffort-pod46aeaaeb_d9ac_4b44_9de6_07a71c67ea74.slice. 
Jan 13 20:10:01.743107 kubelet[2399]: I0113 20:10:01.741592 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-lib-modules\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743107 kubelet[2399]: I0113 20:10:01.741642 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cddd1e2-6d9e-4325-852d-c943445266fc-tigera-ca-bundle\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743107 kubelet[2399]: I0113 20:10:01.741690 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-var-lib-calico\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743107 kubelet[2399]: I0113 20:10:01.741735 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-cni-bin-dir\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743107 kubelet[2399]: I0113 20:10:01.741789 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-flexvol-driver-host\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743427 kubelet[2399]: I0113 20:10:01.741842 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfhc\" (UniqueName: \"kubernetes.io/projected/5cddd1e2-6d9e-4325-852d-c943445266fc-kube-api-access-xxfhc\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743427 kubelet[2399]: I0113 20:10:01.741889 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrv4\" (UniqueName: \"kubernetes.io/projected/46aeaaeb-d9ac-4b44-9de6-07a71c67ea74-kube-api-access-vsrv4\") pod \"kube-proxy-tkkr4\" (UID: \"46aeaaeb-d9ac-4b44-9de6-07a71c67ea74\") " pod="kube-system/kube-proxy-tkkr4" Jan 13 20:10:01.743427 kubelet[2399]: I0113 20:10:01.741933 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5cddd1e2-6d9e-4325-852d-c943445266fc-cni-net-dir\") pod \"calico-node-shb9d\" (UID: \"5cddd1e2-6d9e-4325-852d-c943445266fc\") " pod="calico-system/calico-node-shb9d" Jan 13 20:10:01.743427 kubelet[2399]: I0113 20:10:01.741970 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaac7307-8099-4833-a054-611983b758f2-kubelet-dir\") pod \"csi-node-driver-57qj4\" (UID: \"eaac7307-8099-4833-a054-611983b758f2\") " pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:01.854517 kubelet[2399]: E0113 
20:10:01.854356 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:01.854517 kubelet[2399]: W0113 20:10:01.854392 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:01.854517 kubelet[2399]: E0113 20:10:01.854427 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:01.874440 kubelet[2399]: E0113 20:10:01.873519 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:01.874440 kubelet[2399]: W0113 20:10:01.873558 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:01.874440 kubelet[2399]: E0113 20:10:01.873592 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:01.885518 kubelet[2399]: E0113 20:10:01.885322 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:01.885518 kubelet[2399]: W0113 20:10:01.885385 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:01.885518 kubelet[2399]: E0113 20:10:01.885429 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:01.895257 kubelet[2399]: E0113 20:10:01.894689 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:01.895257 kubelet[2399]: W0113 20:10:01.894726 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:01.895257 kubelet[2399]: E0113 20:10:01.894760 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:02.036191 containerd[1929]: time="2025-01-13T20:10:02.036112144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shb9d,Uid:5cddd1e2-6d9e-4325-852d-c943445266fc,Namespace:calico-system,Attempt:0,}" Jan 13 20:10:02.050312 containerd[1929]: time="2025-01-13T20:10:02.049871429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkkr4,Uid:46aeaaeb-d9ac-4b44-9de6-07a71c67ea74,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:02.702111 kubelet[2399]: E0113 20:10:02.702052 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:02.792333 containerd[1929]: time="2025-01-13T20:10:02.792246188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:10:02.795614 containerd[1929]: time="2025-01-13T20:10:02.795540886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:10:02.798261 containerd[1929]: time="2025-01-13T20:10:02.798153164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:10:02.800187 containerd[1929]: time="2025-01-13T20:10:02.799897624Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:10:02.801962 containerd[1929]: time="2025-01-13T20:10:02.801912999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:10:02.809764 containerd[1929]: time="2025-01-13T20:10:02.809523410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:10:02.817895 containerd[1929]: time="2025-01-13T20:10:02.816884325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 766.895909ms" Jan 13 20:10:02.820543 containerd[1929]: time="2025-01-13T20:10:02.820485825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 784.225082ms" Jan 13 20:10:02.856734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044149135.mount: Deactivated successfully. Jan 13 20:10:03.024818 containerd[1929]: time="2025-01-13T20:10:03.024366015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:03.024818 containerd[1929]: time="2025-01-13T20:10:03.024498021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:03.024818 containerd[1929]: time="2025-01-13T20:10:03.024527460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:03.025441 containerd[1929]: time="2025-01-13T20:10:03.024712725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:03.037018 containerd[1929]: time="2025-01-13T20:10:03.036594439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:03.037018 containerd[1929]: time="2025-01-13T20:10:03.036702396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:03.037018 containerd[1929]: time="2025-01-13T20:10:03.036738234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:03.037018 containerd[1929]: time="2025-01-13T20:10:03.036870805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:03.207563 systemd[1]: Started cri-containerd-4d13d5648542396d504f957bb0de1e9960841d8962b7ee8510381fe60cbef0ba.scope - libcontainer container 4d13d5648542396d504f957bb0de1e9960841d8962b7ee8510381fe60cbef0ba. Jan 13 20:10:03.211776 systemd[1]: Started cri-containerd-c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707.scope - libcontainer container c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707. Jan 13 20:10:03.268916 containerd[1929]: time="2025-01-13T20:10:03.268829968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkkr4,Uid:46aeaaeb-d9ac-4b44-9de6-07a71c67ea74,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d13d5648542396d504f957bb0de1e9960841d8962b7ee8510381fe60cbef0ba\"" Jan 13 20:10:03.275091 containerd[1929]: time="2025-01-13T20:10:03.274805522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shb9d,Uid:5cddd1e2-6d9e-4325-852d-c943445266fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\"" Jan 13 20:10:03.277516 containerd[1929]: time="2025-01-13T20:10:03.276668601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:10:03.703231 kubelet[2399]: E0113 20:10:03.703036 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:03.847974 kubelet[2399]: E0113 20:10:03.847540 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:04.646695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58177239.mount: Deactivated successfully. 
Jan 13 20:10:04.704507 kubelet[2399]: E0113 20:10:04.704225 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:05.129029 containerd[1929]: time="2025-01-13T20:10:05.128969194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:05.130919 containerd[1929]: time="2025-01-13T20:10:05.130861519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Jan 13 20:10:05.131586 containerd[1929]: time="2025-01-13T20:10:05.131548611Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:05.136088 containerd[1929]: time="2025-01-13T20:10:05.136039383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:05.137389 containerd[1929]: time="2025-01-13T20:10:05.137332237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.860037931s" Jan 13 20:10:05.137513 containerd[1929]: time="2025-01-13T20:10:05.137386156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:10:05.140087 containerd[1929]: time="2025-01-13T20:10:05.140021809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:10:05.142594 containerd[1929]: time="2025-01-13T20:10:05.142541665Z" level=info msg="CreateContainer within sandbox \"4d13d5648542396d504f957bb0de1e9960841d8962b7ee8510381fe60cbef0ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:10:05.169357 containerd[1929]: time="2025-01-13T20:10:05.169295751Z" level=info msg="CreateContainer within sandbox \"4d13d5648542396d504f957bb0de1e9960841d8962b7ee8510381fe60cbef0ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b21d0cf91fce46c76945ad0ab372a2878d2c93f6988e35c16daa14f865d1d587\"" Jan 13 20:10:05.170666 containerd[1929]: time="2025-01-13T20:10:05.170585123Z" level=info msg="StartContainer for \"b21d0cf91fce46c76945ad0ab372a2878d2c93f6988e35c16daa14f865d1d587\"" Jan 13 20:10:05.233529 systemd[1]: Started cri-containerd-b21d0cf91fce46c76945ad0ab372a2878d2c93f6988e35c16daa14f865d1d587.scope - libcontainer container b21d0cf91fce46c76945ad0ab372a2878d2c93f6988e35c16daa14f865d1d587. 
Jan 13 20:10:05.289126 containerd[1929]: time="2025-01-13T20:10:05.289055146Z" level=info msg="StartContainer for \"b21d0cf91fce46c76945ad0ab372a2878d2c93f6988e35c16daa14f865d1d587\" returns successfully" Jan 13 20:10:05.704669 kubelet[2399]: E0113 20:10:05.704594 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:05.847969 kubelet[2399]: E0113 20:10:05.847446 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:05.980445 kubelet[2399]: E0113 20:10:05.980309 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.980445 kubelet[2399]: W0113 20:10:05.980346 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.980445 kubelet[2399]: E0113 20:10:05.980402 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.981112 kubelet[2399]: E0113 20:10:05.980833 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.981112 kubelet[2399]: W0113 20:10:05.980862 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.981112 kubelet[2399]: E0113 20:10:05.980886 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.981317 kubelet[2399]: E0113 20:10:05.981281 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.981391 kubelet[2399]: W0113 20:10:05.981321 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.981391 kubelet[2399]: E0113 20:10:05.981346 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.981766 kubelet[2399]: E0113 20:10:05.981722 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.981766 kubelet[2399]: W0113 20:10:05.981752 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.981887 kubelet[2399]: E0113 20:10:05.981774 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:05.982168 kubelet[2399]: E0113 20:10:05.982129 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.982262 kubelet[2399]: W0113 20:10:05.982177 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.982262 kubelet[2399]: E0113 20:10:05.982200 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.982652 kubelet[2399]: E0113 20:10:05.982614 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.982652 kubelet[2399]: W0113 20:10:05.982642 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.982771 kubelet[2399]: E0113 20:10:05.982685 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.983078 kubelet[2399]: E0113 20:10:05.983038 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.983078 kubelet[2399]: W0113 20:10:05.983067 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.983197 kubelet[2399]: E0113 20:10:05.983088 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.983486 kubelet[2399]: E0113 20:10:05.983459 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.983545 kubelet[2399]: W0113 20:10:05.983484 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.983545 kubelet[2399]: E0113 20:10:05.983529 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.983938 kubelet[2399]: E0113 20:10:05.983912 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.984000 kubelet[2399]: W0113 20:10:05.983937 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.984000 kubelet[2399]: E0113 20:10:05.983958 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:05.984353 kubelet[2399]: E0113 20:10:05.984326 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.984416 kubelet[2399]: W0113 20:10:05.984351 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.984416 kubelet[2399]: E0113 20:10:05.984385 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.984707 kubelet[2399]: E0113 20:10:05.984682 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.984774 kubelet[2399]: W0113 20:10:05.984708 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.984774 kubelet[2399]: E0113 20:10:05.984752 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.985112 kubelet[2399]: E0113 20:10:05.985085 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.985174 kubelet[2399]: W0113 20:10:05.985110 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.985174 kubelet[2399]: E0113 20:10:05.985157 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.985599 kubelet[2399]: E0113 20:10:05.985572 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.985663 kubelet[2399]: W0113 20:10:05.985597 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.985663 kubelet[2399]: E0113 20:10:05.985619 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.985947 kubelet[2399]: E0113 20:10:05.985922 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.986005 kubelet[2399]: W0113 20:10:05.985945 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.986005 kubelet[2399]: E0113 20:10:05.985965 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:05.986284 kubelet[2399]: E0113 20:10:05.986260 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.986344 kubelet[2399]: W0113 20:10:05.986284 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.986344 kubelet[2399]: E0113 20:10:05.986304 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.986602 kubelet[2399]: E0113 20:10:05.986578 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.986674 kubelet[2399]: W0113 20:10:05.986601 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.986674 kubelet[2399]: E0113 20:10:05.986621 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.986934 kubelet[2399]: E0113 20:10:05.986910 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.986993 kubelet[2399]: W0113 20:10:05.986933 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.986993 kubelet[2399]: E0113 20:10:05.986952 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.987364 kubelet[2399]: E0113 20:10:05.987336 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.987430 kubelet[2399]: W0113 20:10:05.987362 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.987430 kubelet[2399]: E0113 20:10:05.987385 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:05.987705 kubelet[2399]: E0113 20:10:05.987680 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.987772 kubelet[2399]: W0113 20:10:05.987703 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.987772 kubelet[2399]: E0113 20:10:05.987723 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:05.988023 kubelet[2399]: E0113 20:10:05.987999 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:05.988081 kubelet[2399]: W0113 20:10:05.988022 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:05.988081 kubelet[2399]: E0113 20:10:05.988042 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.080745 kubelet[2399]: E0113 20:10:06.080690 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.080745 kubelet[2399]: W0113 20:10:06.080730 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.080921 kubelet[2399]: E0113 20:10:06.080761 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.081145 kubelet[2399]: E0113 20:10:06.081116 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.081236 kubelet[2399]: W0113 20:10:06.081143 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.081236 kubelet[2399]: E0113 20:10:06.081174 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.081637 kubelet[2399]: E0113 20:10:06.081596 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.081637 kubelet[2399]: W0113 20:10:06.081626 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.081772 kubelet[2399]: E0113 20:10:06.081665 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.081984 kubelet[2399]: E0113 20:10:06.081959 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.082043 kubelet[2399]: W0113 20:10:06.081984 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.082043 kubelet[2399]: E0113 20:10:06.082022 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.082366 kubelet[2399]: E0113 20:10:06.082341 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.082431 kubelet[2399]: W0113 20:10:06.082365 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.082431 kubelet[2399]: E0113 20:10:06.082395 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.082780 kubelet[2399]: E0113 20:10:06.082753 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.082877 kubelet[2399]: W0113 20:10:06.082779 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.083350 kubelet[2399]: E0113 20:10:06.082952 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.083350 kubelet[2399]: E0113 20:10:06.083083 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.083350 kubelet[2399]: W0113 20:10:06.083099 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.083350 kubelet[2399]: E0113 20:10:06.083120 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.083592 kubelet[2399]: E0113 20:10:06.083462 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.083592 kubelet[2399]: W0113 20:10:06.083480 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.083592 kubelet[2399]: E0113 20:10:06.083523 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.083846 kubelet[2399]: E0113 20:10:06.083821 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.083913 kubelet[2399]: W0113 20:10:06.083845 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.083913 kubelet[2399]: E0113 20:10:06.083888 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.084329 kubelet[2399]: E0113 20:10:06.084302 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.084415 kubelet[2399]: W0113 20:10:06.084328 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.084415 kubelet[2399]: E0113 20:10:06.084369 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.085039 kubelet[2399]: E0113 20:10:06.084993 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.085039 kubelet[2399]: W0113 20:10:06.085025 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.085173 kubelet[2399]: E0113 20:10:06.085059 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.085471 kubelet[2399]: E0113 20:10:06.085444 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.085532 kubelet[2399]: W0113 20:10:06.085470 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.085532 kubelet[2399]: E0113 20:10:06.085492 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.705143 kubelet[2399]: E0113 20:10:06.705083 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:06.893824 kubelet[2399]: E0113 20:10:06.893779 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.893824 kubelet[2399]: W0113 20:10:06.893813 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.894155 kubelet[2399]: E0113 20:10:06.893843 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.894267 kubelet[2399]: E0113 20:10:06.894159 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.894267 kubelet[2399]: W0113 20:10:06.894176 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.894267 kubelet[2399]: E0113 20:10:06.894197 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.894565 kubelet[2399]: E0113 20:10:06.894510 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.894565 kubelet[2399]: W0113 20:10:06.894526 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.894565 kubelet[2399]: E0113 20:10:06.894544 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.894861 kubelet[2399]: E0113 20:10:06.894823 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.894861 kubelet[2399]: W0113 20:10:06.894838 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.894861 kubelet[2399]: E0113 20:10:06.894856 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.895171 kubelet[2399]: E0113 20:10:06.895140 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.895171 kubelet[2399]: W0113 20:10:06.895164 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.895380 kubelet[2399]: E0113 20:10:06.895185 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.895512 kubelet[2399]: E0113 20:10:06.895498 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.895626 kubelet[2399]: W0113 20:10:06.895514 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.895626 kubelet[2399]: E0113 20:10:06.895534 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.895848 kubelet[2399]: E0113 20:10:06.895819 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.895848 kubelet[2399]: W0113 20:10:06.895835 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.895990 kubelet[2399]: E0113 20:10:06.895853 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.896152 kubelet[2399]: E0113 20:10:06.896122 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.896152 kubelet[2399]: W0113 20:10:06.896146 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.896350 kubelet[2399]: E0113 20:10:06.896167 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.896525 kubelet[2399]: E0113 20:10:06.896496 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.896525 kubelet[2399]: W0113 20:10:06.896521 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.896678 kubelet[2399]: E0113 20:10:06.896542 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.896852 kubelet[2399]: E0113 20:10:06.896818 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.896852 kubelet[2399]: W0113 20:10:06.896844 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.897009 kubelet[2399]: E0113 20:10:06.896864 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.897160 kubelet[2399]: E0113 20:10:06.897133 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.897160 kubelet[2399]: W0113 20:10:06.897148 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.897361 kubelet[2399]: E0113 20:10:06.897167 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.897515 kubelet[2399]: E0113 20:10:06.897478 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.897515 kubelet[2399]: W0113 20:10:06.897503 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.897679 kubelet[2399]: E0113 20:10:06.897523 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.897843 kubelet[2399]: E0113 20:10:06.897813 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.897843 kubelet[2399]: W0113 20:10:06.897837 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.898014 kubelet[2399]: E0113 20:10:06.897859 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.898156 kubelet[2399]: E0113 20:10:06.898121 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.898156 kubelet[2399]: W0113 20:10:06.898144 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.898339 kubelet[2399]: E0113 20:10:06.898167 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.898519 kubelet[2399]: E0113 20:10:06.898493 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.898519 kubelet[2399]: W0113 20:10:06.898518 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.898666 kubelet[2399]: E0113 20:10:06.898539 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.898841 kubelet[2399]: E0113 20:10:06.898805 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.898841 kubelet[2399]: W0113 20:10:06.898828 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.898988 kubelet[2399]: E0113 20:10:06.898849 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.899145 kubelet[2399]: E0113 20:10:06.899120 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.899227 kubelet[2399]: W0113 20:10:06.899145 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.899227 kubelet[2399]: E0113 20:10:06.899165 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.899574 kubelet[2399]: E0113 20:10:06.899546 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.899638 kubelet[2399]: W0113 20:10:06.899575 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.899638 kubelet[2399]: E0113 20:10:06.899597 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.899899 kubelet[2399]: E0113 20:10:06.899873 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.899965 kubelet[2399]: W0113 20:10:06.899897 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.899965 kubelet[2399]: E0113 20:10:06.899917 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.900222 kubelet[2399]: E0113 20:10:06.900194 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.900279 kubelet[2399]: W0113 20:10:06.900229 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.900279 kubelet[2399]: E0113 20:10:06.900250 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.986995 kubelet[2399]: E0113 20:10:06.986569 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.986995 kubelet[2399]: W0113 20:10:06.986599 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.986995 kubelet[2399]: E0113 20:10:06.986628 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.987399 kubelet[2399]: E0113 20:10:06.987046 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.987399 kubelet[2399]: W0113 20:10:06.987065 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.987399 kubelet[2399]: E0113 20:10:06.987101 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.987654 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.989346 kubelet[2399]: W0113 20:10:06.987675 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.987708 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.988254 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.989346 kubelet[2399]: W0113 20:10:06.988273 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.988475 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.988907 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.989346 kubelet[2399]: W0113 20:10:06.988924 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.989346 kubelet[2399]: E0113 20:10:06.988967 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.989952 kubelet[2399]: E0113 20:10:06.989396 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.989952 kubelet[2399]: W0113 20:10:06.989416 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.989952 kubelet[2399]: E0113 20:10:06.989455 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.989952 kubelet[2399]: E0113 20:10:06.989766 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.989952 kubelet[2399]: W0113 20:10:06.989782 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.989952 kubelet[2399]: E0113 20:10:06.989926 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.990297 kubelet[2399]: E0113 20:10:06.990127 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.990297 kubelet[2399]: W0113 20:10:06.990142 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.990297 kubelet[2399]: E0113 20:10:06.990167 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.992548 kubelet[2399]: E0113 20:10:06.990621 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.992548 kubelet[2399]: W0113 20:10:06.990661 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.992548 kubelet[2399]: E0113 20:10:06.990747 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.992548 kubelet[2399]: E0113 20:10:06.991815 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.992548 kubelet[2399]: W0113 20:10:06.991837 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.992548 kubelet[2399]: E0113 20:10:06.991870 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:06.993198 kubelet[2399]: E0113 20:10:06.992716 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.993198 kubelet[2399]: W0113 20:10:06.992739 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.993198 kubelet[2399]: E0113 20:10:06.992774 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:10:06.993198 kubelet[2399]: E0113 20:10:06.993138 2399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:10:06.993198 kubelet[2399]: W0113 20:10:06.993157 2399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:10:06.993198 kubelet[2399]: E0113 20:10:06.993178 2399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:10:07.705764 kubelet[2399]: E0113 20:10:07.705710 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:07.847419 kubelet[2399]: E0113 20:10:07.846904 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:08.114370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255253020.mount: Deactivated successfully. Jan 13 20:10:08.244528 containerd[1929]: time="2025-01-13T20:10:08.244454179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:08.245993 containerd[1929]: time="2025-01-13T20:10:08.245911468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 13 20:10:08.248813 containerd[1929]: time="2025-01-13T20:10:08.248745700Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:08.252709 containerd[1929]: time="2025-01-13T20:10:08.252647722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:08.255160 containerd[1929]: time="2025-01-13T20:10:08.254166397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 3.114081448s" Jan 13 20:10:08.255160 containerd[1929]: time="2025-01-13T20:10:08.254251003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:10:08.257998 containerd[1929]: time="2025-01-13T20:10:08.257928813Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:10:08.285674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028877889.mount: Deactivated successfully. 
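The repeated driver-call failures above are the kubelet's FlexVolume probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not installed yet, so the call produces empty output and the JSON unmarshal fails. The pod2daemon-flexvol image pulled immediately afterwards is the Calico component that eventually installs that driver. As an illustration only (not the real nodeagent~uds driver; the capability flag shown is an assumption), a minimal FlexVolume-style executable answers init with a small JSON status object along these lines:

package main

// Illustrative sketch of the FlexVolume call convention implied by the errors above:
// the driver binary is invoked as "<driver> init" and must print a JSON status object.
// This is NOT the actual Calico nodeagent~uds driver.

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
		return
	}
	// Other calls are reported as unsupported, which the kubelet treats as "use default handling".
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}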
Jan 13 20:10:08.294933 containerd[1929]: time="2025-01-13T20:10:08.294854899Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6\"" Jan 13 20:10:08.296229 containerd[1929]: time="2025-01-13T20:10:08.296016444Z" level=info msg="StartContainer for \"05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6\"" Jan 13 20:10:08.347518 systemd[1]: Started cri-containerd-05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6.scope - libcontainer container 05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6. Jan 13 20:10:08.403298 containerd[1929]: time="2025-01-13T20:10:08.401838241Z" level=info msg="StartContainer for \"05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6\" returns successfully" Jan 13 20:10:08.421683 systemd[1]: cri-containerd-05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6.scope: Deactivated successfully. Jan 13 20:10:08.610931 containerd[1929]: time="2025-01-13T20:10:08.610473420Z" level=info msg="shim disconnected" id=05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6 namespace=k8s.io Jan 13 20:10:08.610931 containerd[1929]: time="2025-01-13T20:10:08.610830646Z" level=warning msg="cleaning up after shim disconnected" id=05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6 namespace=k8s.io Jan 13 20:10:08.610931 containerd[1929]: time="2025-01-13T20:10:08.610853254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:08.706402 kubelet[2399]: E0113 20:10:08.706241 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:08.889656 containerd[1929]: time="2025-01-13T20:10:08.889557946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:10:08.929522 kubelet[2399]: I0113 20:10:08.929400 2399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tkkr4" podStartSLOduration=7.062606798 podStartE2EDuration="8.929380732s" podCreationTimestamp="2025-01-13 20:10:00 +0000 UTC" firstStartedPulling="2025-01-13 20:10:03.273017156 +0000 UTC m=+5.142946492" lastFinishedPulling="2025-01-13 20:10:05.13979109 +0000 UTC m=+7.009720426" observedRunningTime="2025-01-13 20:10:05.946385007 +0000 UTC m=+7.816314391" watchObservedRunningTime="2025-01-13 20:10:08.929380732 +0000 UTC m=+10.799310092" Jan 13 20:10:09.073969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05f56d9e5f39678a3beffe5d692a5d3dffdf3aac56d976f3487f6f25d7c34ac6-rootfs.mount: Deactivated successfully. 
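The pod_startup_latency_tracker entry above for kube-proxy-tkkr4 can be reproduced from the timestamps it prints: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the second relation is inferred from the numbers lining up, not quoted from kubelet source. A small sketch of the arithmetic using the logged values:

package main

// Reproduces the durations in the pod_startup_latency_tracker log entry above
// from the timestamps that entry prints.

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-13 20:10:00 +0000 UTC")
	firstPull := mustParse("2025-01-13 20:10:03.273017156 +0000 UTC")
	lastPull := mustParse("2025-01-13 20:10:05.13979109 +0000 UTC")
	running := mustParse("2025-01-13 20:10:08.929380732 +0000 UTC")

	e2e := running.Sub(created)          // 8.929380732s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 7.062606798s = podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO:", slo)
}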
Jan 13 20:10:09.707150 kubelet[2399]: E0113 20:10:09.707100 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:09.848045 kubelet[2399]: E0113 20:10:09.847633 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:10.708199 kubelet[2399]: E0113 20:10:10.708154 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:11.708810 kubelet[2399]: E0113 20:10:11.708743 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:11.848023 kubelet[2399]: E0113 20:10:11.847568 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:12.709644 kubelet[2399]: E0113 20:10:12.709532 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:13.710128 kubelet[2399]: E0113 20:10:13.709847 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:13.848091 kubelet[2399]: E0113 20:10:13.847086 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:13.977365 containerd[1929]: time="2025-01-13T20:10:13.977198727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:13.979179 containerd[1929]: time="2025-01-13T20:10:13.978952119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:10:13.980870 containerd[1929]: time="2025-01-13T20:10:13.980788593Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:13.984536 containerd[1929]: time="2025-01-13T20:10:13.984461048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:13.986524 containerd[1929]: time="2025-01-13T20:10:13.985984922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.096367185s" Jan 13 20:10:13.986524 containerd[1929]: time="2025-01-13T20:10:13.986037088Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:10:13.989897 containerd[1929]: time="2025-01-13T20:10:13.989663428Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:10:14.012556 containerd[1929]: time="2025-01-13T20:10:14.012495875Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc\"" Jan 13 20:10:14.013636 containerd[1929]: time="2025-01-13T20:10:14.013589081Z" level=info msg="StartContainer for \"194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc\"" Jan 13 20:10:14.073537 systemd[1]: Started cri-containerd-194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc.scope - libcontainer container 194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc. Jan 13 20:10:14.129827 containerd[1929]: time="2025-01-13T20:10:14.129744575Z" level=info msg="StartContainer for \"194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc\" returns successfully" Jan 13 20:10:14.711142 kubelet[2399]: E0113 20:10:14.711048 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:14.992817 systemd[1]: cri-containerd-194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc.scope: Deactivated successfully. Jan 13 20:10:15.037766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc-rootfs.mount: Deactivated successfully. Jan 13 20:10:15.076820 kubelet[2399]: I0113 20:10:15.076720 2399 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:10:15.711411 kubelet[2399]: E0113 20:10:15.711353 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:15.858450 systemd[1]: Created slice kubepods-besteffort-podeaac7307_8099_4833_a054_611983b758f2.slice - libcontainer container kubepods-besteffort-podeaac7307_8099_4833_a054_611983b758f2.slice. 
Jan 13 20:10:15.863239 containerd[1929]: time="2025-01-13T20:10:15.863134261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:0,}" Jan 13 20:10:15.952964 containerd[1929]: time="2025-01-13T20:10:15.952609997Z" level=info msg="shim disconnected" id=194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc namespace=k8s.io Jan 13 20:10:15.952964 containerd[1929]: time="2025-01-13T20:10:15.952727896Z" level=warning msg="cleaning up after shim disconnected" id=194b0bcd2044c204d1fa79af1c45cb074dd80b2adf8ae6a042e9edd42e824ebc namespace=k8s.io Jan 13 20:10:15.952964 containerd[1929]: time="2025-01-13T20:10:15.952747874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:16.071737 containerd[1929]: time="2025-01-13T20:10:16.071677257Z" level=error msg="Failed to destroy network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:16.074524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987-shm.mount: Deactivated successfully. Jan 13 20:10:16.075344 containerd[1929]: time="2025-01-13T20:10:16.075081955Z" level=error msg="encountered an error cleaning up failed sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:16.075344 containerd[1929]: time="2025-01-13T20:10:16.075273691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:16.076523 kubelet[2399]: E0113 20:10:16.076231 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:16.076523 kubelet[2399]: E0113 20:10:16.076396 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:16.076523 kubelet[2399]: E0113 20:10:16.076461 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:16.076921 kubelet[2399]: E0113 20:10:16.076801 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:16.712507 kubelet[2399]: E0113 20:10:16.712437 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:16.921541 kubelet[2399]: I0113 20:10:16.920696 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987" Jan 13 20:10:16.922684 containerd[1929]: time="2025-01-13T20:10:16.922135588Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:16.922684 containerd[1929]: time="2025-01-13T20:10:16.922430767Z" level=info msg="Ensure that sandbox d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987 in task-service has been cleanup successfully" Jan 13 20:10:16.926058 containerd[1929]: time="2025-01-13T20:10:16.925825992Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:16.926058 containerd[1929]: time="2025-01-13T20:10:16.925880764Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:16.928105 containerd[1929]: time="2025-01-13T20:10:16.927653533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:1,}" Jan 13 20:10:16.927653 systemd[1]: run-netns-cni\x2d2bc1f1c2\x2d2400\x2d6282\x2d3bdf\x2d841489b12f6f.mount: Deactivated successfully. 
Jan 13 20:10:16.931367 containerd[1929]: time="2025-01-13T20:10:16.930998321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:10:17.052909 containerd[1929]: time="2025-01-13T20:10:17.052765059Z" level=error msg="Failed to destroy network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:17.053424 containerd[1929]: time="2025-01-13T20:10:17.053354770Z" level=error msg="encountered an error cleaning up failed sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:17.054429 containerd[1929]: time="2025-01-13T20:10:17.053455488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:17.054616 kubelet[2399]: E0113 20:10:17.054434 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:17.054616 kubelet[2399]: E0113 20:10:17.054509 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:17.054616 kubelet[2399]: E0113 20:10:17.054544 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:17.055314 kubelet[2399]: E0113 20:10:17.054620 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:17.056838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8-shm.mount: Deactivated successfully. Jan 13 20:10:17.713562 kubelet[2399]: E0113 20:10:17.713489 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:17.930435 kubelet[2399]: I0113 20:10:17.930395 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8" Jan 13 20:10:17.931332 containerd[1929]: time="2025-01-13T20:10:17.931246906Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:17.932675 containerd[1929]: time="2025-01-13T20:10:17.932385952Z" level=info msg="Ensure that sandbox ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8 in task-service has been cleanup successfully" Jan 13 20:10:17.932903 containerd[1929]: time="2025-01-13T20:10:17.932771704Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:17.935250 containerd[1929]: time="2025-01-13T20:10:17.932799030Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:17.935180 systemd[1]: run-netns-cni\x2d6cacd01f\x2d0c4b\x2de6b7\x2de0d3\x2df3bcca44053a.mount: Deactivated successfully. Jan 13 20:10:17.937836 containerd[1929]: time="2025-01-13T20:10:17.936415862Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:17.937836 containerd[1929]: time="2025-01-13T20:10:17.936587247Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:17.937836 containerd[1929]: time="2025-01-13T20:10:17.936609206Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:17.939520 containerd[1929]: time="2025-01-13T20:10:17.939468891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:2,}" Jan 13 20:10:18.051085 containerd[1929]: time="2025-01-13T20:10:18.051006937Z" level=error msg="Failed to destroy network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:18.053758 containerd[1929]: time="2025-01-13T20:10:18.053663529Z" level=error msg="encountered an error cleaning up failed sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:18.054089 containerd[1929]: time="2025-01-13T20:10:18.053779183Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:18.054690 kubelet[2399]: E0113 20:10:18.054459 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:18.054690 kubelet[2399]: E0113 20:10:18.054561 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:18.054690 kubelet[2399]: E0113 20:10:18.054617 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:18.055420 kubelet[2399]: E0113 20:10:18.054977 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:18.055156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b-shm.mount: Deactivated successfully. 
Jan 13 20:10:18.714067 kubelet[2399]: E0113 20:10:18.713994 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:18.936876 kubelet[2399]: I0113 20:10:18.936835 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b" Jan 13 20:10:18.938360 containerd[1929]: time="2025-01-13T20:10:18.938260309Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:18.940565 containerd[1929]: time="2025-01-13T20:10:18.938559726Z" level=info msg="Ensure that sandbox f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b in task-service has been cleanup successfully" Jan 13 20:10:18.941921 containerd[1929]: time="2025-01-13T20:10:18.941311045Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:18.941921 containerd[1929]: time="2025-01-13T20:10:18.941353102Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:18.942113 systemd[1]: run-netns-cni\x2d1f8e1a93\x2d48fd\x2d4708\x2d4e4f\x2dcbc4477c9828.mount: Deactivated successfully. Jan 13 20:10:18.943327 containerd[1929]: time="2025-01-13T20:10:18.942411167Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:18.943327 containerd[1929]: time="2025-01-13T20:10:18.942561866Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:18.943327 containerd[1929]: time="2025-01-13T20:10:18.942584029Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:18.946251 containerd[1929]: time="2025-01-13T20:10:18.945551516Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:18.946251 containerd[1929]: time="2025-01-13T20:10:18.945729265Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:18.946251 containerd[1929]: time="2025-01-13T20:10:18.945753013Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:18.947842 containerd[1929]: time="2025-01-13T20:10:18.947119176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:3,}" Jan 13 20:10:18.987003 kubelet[2399]: I0113 20:10:18.986864 2399 topology_manager.go:215] "Topology Admit Handler" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" podNamespace="default" podName="nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:19.005793 systemd[1]: Created slice kubepods-besteffort-pod95bdcc03_7517_44ab_a81e_a674bfc95b9f.slice - libcontainer container kubepods-besteffort-pod95bdcc03_7517_44ab_a81e_a674bfc95b9f.slice. 
Jan 13 20:10:19.099771 containerd[1929]: time="2025-01-13T20:10:19.099489652Z" level=error msg="Failed to destroy network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.102869 containerd[1929]: time="2025-01-13T20:10:19.102514192Z" level=error msg="encountered an error cleaning up failed sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.102869 containerd[1929]: time="2025-01-13T20:10:19.102621105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.103445 kubelet[2399]: E0113 20:10:19.103198 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.103445 kubelet[2399]: E0113 20:10:19.103371 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:19.103620 kubelet[2399]: E0113 20:10:19.103410 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:19.103857 kubelet[2399]: E0113 20:10:19.103791 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:19.104886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03-shm.mount: Deactivated successfully. Jan 13 20:10:19.159494 kubelet[2399]: I0113 20:10:19.159445 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx22\" (UniqueName: \"kubernetes.io/projected/95bdcc03-7517-44ab-a81e-a674bfc95b9f-kube-api-access-ccx22\") pod \"nginx-deployment-85f456d6dd-7xnjr\" (UID: \"95bdcc03-7517-44ab-a81e-a674bfc95b9f\") " pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:19.315430 containerd[1929]: time="2025-01-13T20:10:19.315315537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:0,}" Jan 13 20:10:19.437347 containerd[1929]: time="2025-01-13T20:10:19.437282450Z" level=error msg="Failed to destroy network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.437890 containerd[1929]: time="2025-01-13T20:10:19.437844835Z" level=error msg="encountered an error cleaning up failed sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.437985 containerd[1929]: time="2025-01-13T20:10:19.437936129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.438581 kubelet[2399]: E0113 20:10:19.438434 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:19.439144 kubelet[2399]: E0113 20:10:19.438546 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:19.439144 kubelet[2399]: E0113 20:10:19.438745 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:19.439144 kubelet[2399]: E0113 20:10:19.438850 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:19.700438 kubelet[2399]: E0113 20:10:19.700283 2399 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:19.714764 kubelet[2399]: E0113 20:10:19.714696 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:19.918183 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:10:19.951348 kubelet[2399]: I0113 20:10:19.951132 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03" Jan 13 20:10:19.953864 containerd[1929]: time="2025-01-13T20:10:19.953780502Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:19.955083 containerd[1929]: time="2025-01-13T20:10:19.954098673Z" level=info msg="Ensure that sandbox 6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03 in task-service has been cleanup successfully" Jan 13 20:10:19.957729 containerd[1929]: time="2025-01-13T20:10:19.957431298Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:19.957729 containerd[1929]: time="2025-01-13T20:10:19.957526938Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:19.958723 containerd[1929]: time="2025-01-13T20:10:19.958244093Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:19.959304 systemd[1]: run-netns-cni\x2d7b8e723a\x2dbab8\x2d4db7\x2d83d9\x2d9b5ef95755a1.mount: Deactivated successfully. 
Jan 13 20:10:19.959741 containerd[1929]: time="2025-01-13T20:10:19.959532841Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:19.959741 containerd[1929]: time="2025-01-13T20:10:19.959571596Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:19.963296 kubelet[2399]: I0113 20:10:19.960125 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776" Jan 13 20:10:19.963509 containerd[1929]: time="2025-01-13T20:10:19.962845032Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:19.963509 containerd[1929]: time="2025-01-13T20:10:19.963001590Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:19.963509 containerd[1929]: time="2025-01-13T20:10:19.963024726Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:19.963710 containerd[1929]: time="2025-01-13T20:10:19.963569942Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:19.964390 containerd[1929]: time="2025-01-13T20:10:19.964083247Z" level=info msg="Ensure that sandbox 8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776 in task-service has been cleanup successfully" Jan 13 20:10:19.964390 containerd[1929]: time="2025-01-13T20:10:19.964232085Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:19.964559 containerd[1929]: time="2025-01-13T20:10:19.964522678Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:19.964559 containerd[1929]: time="2025-01-13T20:10:19.964548215Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:19.967926 containerd[1929]: time="2025-01-13T20:10:19.964734476Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:19.967926 containerd[1929]: time="2025-01-13T20:10:19.964787891Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:19.967926 containerd[1929]: time="2025-01-13T20:10:19.967456080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:1,}" Jan 13 20:10:19.968729 systemd[1]: run-netns-cni\x2d18bf44d3\x2dba30\x2da96a\x2d5736\x2d3acbdac1c169.mount: Deactivated successfully. 
Jan 13 20:10:19.976151 containerd[1929]: time="2025-01-13T20:10:19.976085117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:4,}" Jan 13 20:10:20.137031 containerd[1929]: time="2025-01-13T20:10:20.136951879Z" level=error msg="Failed to destroy network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.138173 containerd[1929]: time="2025-01-13T20:10:20.137892621Z" level=error msg="encountered an error cleaning up failed sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.138173 containerd[1929]: time="2025-01-13T20:10:20.137997818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.138630 kubelet[2399]: E0113 20:10:20.138404 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.138630 kubelet[2399]: E0113 20:10:20.138560 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:20.138630 kubelet[2399]: E0113 20:10:20.138594 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:20.139448 kubelet[2399]: E0113 20:10:20.138653 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:20.167356 containerd[1929]: time="2025-01-13T20:10:20.167244254Z" level=error msg="Failed to destroy network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.168493 containerd[1929]: time="2025-01-13T20:10:20.168112600Z" level=error msg="encountered an error cleaning up failed sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.168634 containerd[1929]: time="2025-01-13T20:10:20.168516650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.168906 kubelet[2399]: E0113 20:10:20.168813 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:20.169032 kubelet[2399]: E0113 20:10:20.168904 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:20.169032 kubelet[2399]: E0113 20:10:20.168943 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:20.169455 kubelet[2399]: E0113 20:10:20.169064 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:20.714901 kubelet[2399]: E0113 20:10:20.714813 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:20.945750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215-shm.mount: Deactivated successfully. Jan 13 20:10:20.971108 kubelet[2399]: I0113 20:10:20.970930 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601" Jan 13 20:10:20.973595 containerd[1929]: time="2025-01-13T20:10:20.972564094Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:20.973595 containerd[1929]: time="2025-01-13T20:10:20.972852358Z" level=info msg="Ensure that sandbox caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601 in task-service has been cleanup successfully" Jan 13 20:10:20.976628 containerd[1929]: time="2025-01-13T20:10:20.976517070Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:20.976628 containerd[1929]: time="2025-01-13T20:10:20.976567747Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:20.977870 kubelet[2399]: I0113 20:10:20.977791 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215" Jan 13 20:10:20.978471 systemd[1]: run-netns-cni\x2d22c9c1e5\x2ddcbc\x2ddc3f\x2d1a80\x2dba893e24f0ad.mount: Deactivated successfully. 
Jan 13 20:10:20.982746 containerd[1929]: time="2025-01-13T20:10:20.979852505Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:20.982746 containerd[1929]: time="2025-01-13T20:10:20.980288599Z" level=info msg="Ensure that sandbox 9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215 in task-service has been cleanup successfully" Jan 13 20:10:20.982746 containerd[1929]: time="2025-01-13T20:10:20.981439290Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:20.984442 containerd[1929]: time="2025-01-13T20:10:20.983443259Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:20.984442 containerd[1929]: time="2025-01-13T20:10:20.983488126Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:20.986803 containerd[1929]: time="2025-01-13T20:10:20.984782697Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:20.986803 containerd[1929]: time="2025-01-13T20:10:20.984843591Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:20.985385 systemd[1]: run-netns-cni\x2d3478654f\x2d4dd7\x2de41a\x2d6b28\x2dc184b1a435c5.mount: Deactivated successfully. Jan 13 20:10:20.989553 containerd[1929]: time="2025-01-13T20:10:20.989500034Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:20.989695 containerd[1929]: time="2025-01-13T20:10:20.989670351Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:20.989747 containerd[1929]: time="2025-01-13T20:10:20.989693967Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:20.989846 containerd[1929]: time="2025-01-13T20:10:20.989814039Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:20.989964 containerd[1929]: time="2025-01-13T20:10:20.989931974Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:20.990028 containerd[1929]: time="2025-01-13T20:10:20.989963106Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:20.992842 containerd[1929]: time="2025-01-13T20:10:20.992704820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:2,}" Jan 13 20:10:20.993838 containerd[1929]: time="2025-01-13T20:10:20.992723753Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:20.994764 containerd[1929]: time="2025-01-13T20:10:20.994715320Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:20.994764 containerd[1929]: time="2025-01-13T20:10:20.994759995Z" level=info msg="StopPodSandbox for 
\"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:20.996706 containerd[1929]: time="2025-01-13T20:10:20.996623146Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:20.997050 containerd[1929]: time="2025-01-13T20:10:20.996862750Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:20.997050 containerd[1929]: time="2025-01-13T20:10:20.996963012Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:20.998871 containerd[1929]: time="2025-01-13T20:10:20.998734725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:5,}" Jan 13 20:10:21.210795 containerd[1929]: time="2025-01-13T20:10:21.210720298Z" level=error msg="Failed to destroy network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.213460 containerd[1929]: time="2025-01-13T20:10:21.213112830Z" level=error msg="encountered an error cleaning up failed sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.213612 containerd[1929]: time="2025-01-13T20:10:21.213544962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.215108 kubelet[2399]: E0113 20:10:21.214068 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.215108 kubelet[2399]: E0113 20:10:21.214152 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:21.215108 kubelet[2399]: E0113 20:10:21.214191 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:21.215411 kubelet[2399]: E0113 20:10:21.214445 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:21.223452 containerd[1929]: time="2025-01-13T20:10:21.223072780Z" level=error msg="Failed to destroy network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.226683 containerd[1929]: time="2025-01-13T20:10:21.226617107Z" level=error msg="encountered an error cleaning up failed sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.226968 containerd[1929]: time="2025-01-13T20:10:21.226727754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.227356 kubelet[2399]: E0113 20:10:21.227299 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:21.227571 kubelet[2399]: E0113 20:10:21.227383 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:21.227571 kubelet[2399]: E0113 20:10:21.227425 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:21.227571 kubelet[2399]: E0113 20:10:21.227509 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:21.715909 kubelet[2399]: E0113 20:10:21.715827 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:21.942672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd-shm.mount: Deactivated successfully. Jan 13 20:10:21.990572 kubelet[2399]: I0113 20:10:21.989633 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d" Jan 13 20:10:21.992967 containerd[1929]: time="2025-01-13T20:10:21.992914446Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:21.993572 containerd[1929]: time="2025-01-13T20:10:21.993180210Z" level=info msg="Ensure that sandbox b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d in task-service has been cleanup successfully" Jan 13 20:10:21.997669 systemd[1]: run-netns-cni\x2d9e668579\x2dd3aa\x2d224b\x2d7f6c\x2dbde6d0cc4f4a.mount: Deactivated successfully. 
Jan 13 20:10:21.999615 containerd[1929]: time="2025-01-13T20:10:21.998698888Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:21.999615 containerd[1929]: time="2025-01-13T20:10:21.998751799Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:21.999909838Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:22.000117625Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:22.000142442Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:22.000757341Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:22.000891892Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:22.001031 containerd[1929]: time="2025-01-13T20:10:22.000913647Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:22.002922 containerd[1929]: time="2025-01-13T20:10:22.002807246Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:22.003096 containerd[1929]: time="2025-01-13T20:10:22.003000818Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:22.003096 containerd[1929]: time="2025-01-13T20:10:22.003025034Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:22.003624 kubelet[2399]: I0113 20:10:22.003574 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd" Jan 13 20:10:22.005227 containerd[1929]: time="2025-01-13T20:10:22.005170158Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:22.005492 containerd[1929]: time="2025-01-13T20:10:22.005455937Z" level=info msg="Ensure that sandbox a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd in task-service has been cleanup successfully" Jan 13 20:10:22.006190 containerd[1929]: time="2025-01-13T20:10:22.006146414Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:22.006346 containerd[1929]: time="2025-01-13T20:10:22.006311305Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:22.006418 containerd[1929]: time="2025-01-13T20:10:22.006343625Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:22.007345 containerd[1929]: time="2025-01-13T20:10:22.007301115Z" level=info msg="StopPodSandbox for 
\"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:22.007479 containerd[1929]: time="2025-01-13T20:10:22.007455729Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:22.007533 containerd[1929]: time="2025-01-13T20:10:22.007479152Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:22.009140 containerd[1929]: time="2025-01-13T20:10:22.009082914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:6,}" Jan 13 20:10:22.012723 containerd[1929]: time="2025-01-13T20:10:22.012567031Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:22.012723 containerd[1929]: time="2025-01-13T20:10:22.012617072Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:22.013452 systemd[1]: run-netns-cni\x2dbb16f62f\x2de3a9\x2d491d\x2d638b\x2d947c558836a1.mount: Deactivated successfully. Jan 13 20:10:22.014882 containerd[1929]: time="2025-01-13T20:10:22.014703787Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:22.015012 containerd[1929]: time="2025-01-13T20:10:22.014928887Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:22.015012 containerd[1929]: time="2025-01-13T20:10:22.015000275Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:22.017648 containerd[1929]: time="2025-01-13T20:10:22.017583966Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:22.017788 containerd[1929]: time="2025-01-13T20:10:22.017753095Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:22.017860 containerd[1929]: time="2025-01-13T20:10:22.017780432Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:22.019540 containerd[1929]: time="2025-01-13T20:10:22.019488082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:3,}" Jan 13 20:10:22.198911 containerd[1929]: time="2025-01-13T20:10:22.198846462Z" level=error msg="Failed to destroy network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.199703 containerd[1929]: time="2025-01-13T20:10:22.199642363Z" level=error msg="encountered an error cleaning up failed sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:10:22.199822 containerd[1929]: time="2025-01-13T20:10:22.199750477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.200591 kubelet[2399]: E0113 20:10:22.200070 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.200591 kubelet[2399]: E0113 20:10:22.200144 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:22.200591 kubelet[2399]: E0113 20:10:22.200177 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:22.201041 kubelet[2399]: E0113 20:10:22.200309 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:22.253911 containerd[1929]: time="2025-01-13T20:10:22.253763431Z" level=error msg="Failed to destroy network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.257021 containerd[1929]: time="2025-01-13T20:10:22.256771678Z" level=error msg="encountered an error cleaning up failed sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.257021 containerd[1929]: time="2025-01-13T20:10:22.256872528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.257340 kubelet[2399]: E0113 20:10:22.257166 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:22.257417 kubelet[2399]: E0113 20:10:22.257377 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:22.257506 kubelet[2399]: E0113 20:10:22.257413 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:22.258008 kubelet[2399]: E0113 20:10:22.257540 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:22.716421 kubelet[2399]: E0113 20:10:22.716342 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:22.944849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3-shm.mount: Deactivated successfully. Jan 13 20:10:22.945133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49-shm.mount: Deactivated successfully. 
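Every cycle leaves behind another failed sandbox ID, and on the next pass the kubelet replays StopPodSandbox/TearDown for the whole growing list of earlier IDs before cleaning up the shm and netns mounts, as the systemd "Deactivated successfully" lines above show. A small, self-contained way to tally these failures per pod from a saved copy of this journal is sketched below; it assumes the journal has been exported with one entry per line (as journalctl prints it) to a file named journal.txt, and the regular expression simply matches the "Failed to create sandbox for pod" ... pod="..." lines above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// podRe pulls the pod reference out of the kubelet
// "Failed to create sandbox for pod" lines in this journal.
var podRe = regexp.MustCompile(`Failed to create sandbox for pod.*pod="([^"]+)"`)

func main() {
	f, err := os.Open("journal.txt") // assumed: the log above saved to a file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := podRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-50s %d sandbox failures\n", pod, n)
	}
}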
Jan 13 20:10:23.017629 kubelet[2399]: I0113 20:10:23.017173 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49" Jan 13 20:10:23.019451 containerd[1929]: time="2025-01-13T20:10:23.018940958Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:23.019451 containerd[1929]: time="2025-01-13T20:10:23.019201344Z" level=info msg="Ensure that sandbox 1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49 in task-service has been cleanup successfully" Jan 13 20:10:23.023851 containerd[1929]: time="2025-01-13T20:10:23.023710785Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:23.023851 containerd[1929]: time="2025-01-13T20:10:23.023792474Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:23.024065 systemd[1]: run-netns-cni\x2da60b0ea2\x2d51e3\x2d9191\x2d413b\x2ddc29eb71f87e.mount: Deactivated successfully. Jan 13 20:10:23.025660 containerd[1929]: time="2025-01-13T20:10:23.024878069Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:23.025660 containerd[1929]: time="2025-01-13T20:10:23.025038577Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:23.025660 containerd[1929]: time="2025-01-13T20:10:23.025061148Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:23.027194 containerd[1929]: time="2025-01-13T20:10:23.027055981Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:23.028042 containerd[1929]: time="2025-01-13T20:10:23.027703777Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:23.028042 containerd[1929]: time="2025-01-13T20:10:23.027743469Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:23.029116 containerd[1929]: time="2025-01-13T20:10:23.029073577Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:23.030062 containerd[1929]: time="2025-01-13T20:10:23.030023108Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:23.030204 containerd[1929]: time="2025-01-13T20:10:23.030176533Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:23.031181 containerd[1929]: time="2025-01-13T20:10:23.030808877Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:23.031181 containerd[1929]: time="2025-01-13T20:10:23.030961485Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:23.031181 containerd[1929]: time="2025-01-13T20:10:23.030983672Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns 
successfully" Jan 13 20:10:23.031778 kubelet[2399]: I0113 20:10:23.031725 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3" Jan 13 20:10:23.032134 containerd[1929]: time="2025-01-13T20:10:23.032091934Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:23.033692 containerd[1929]: time="2025-01-13T20:10:23.033538957Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:23.033692 containerd[1929]: time="2025-01-13T20:10:23.033576932Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:23.034466 containerd[1929]: time="2025-01-13T20:10:23.034424147Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:23.034861 containerd[1929]: time="2025-01-13T20:10:23.034828618Z" level=info msg="Ensure that sandbox 9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3 in task-service has been cleanup successfully" Jan 13 20:10:23.035911 containerd[1929]: time="2025-01-13T20:10:23.035185448Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:23.036288 containerd[1929]: time="2025-01-13T20:10:23.036131221Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:23.036288 containerd[1929]: time="2025-01-13T20:10:23.036158534Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:23.039324 containerd[1929]: time="2025-01-13T20:10:23.036968735Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:23.039324 containerd[1929]: time="2025-01-13T20:10:23.037008247Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:23.040616 containerd[1929]: time="2025-01-13T20:10:23.040144970Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:23.041178 systemd[1]: run-netns-cni\x2dcc070623\x2dea02\x2d9885\x2d4498\x2de28777d226a5.mount: Deactivated successfully. 
Jan 13 20:10:23.042021 containerd[1929]: time="2025-01-13T20:10:23.041976702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:7,}" Jan 13 20:10:23.043622 containerd[1929]: time="2025-01-13T20:10:23.042753730Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:23.043622 containerd[1929]: time="2025-01-13T20:10:23.042790949Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:23.044566 containerd[1929]: time="2025-01-13T20:10:23.044431569Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:23.045173 containerd[1929]: time="2025-01-13T20:10:23.044881325Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:23.045173 containerd[1929]: time="2025-01-13T20:10:23.044906982Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:23.045961 containerd[1929]: time="2025-01-13T20:10:23.045697817Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:23.045961 containerd[1929]: time="2025-01-13T20:10:23.045850198Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:23.045961 containerd[1929]: time="2025-01-13T20:10:23.045871448Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:23.048948 containerd[1929]: time="2025-01-13T20:10:23.048875962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:4,}" Jan 13 20:10:23.230069 containerd[1929]: time="2025-01-13T20:10:23.230011013Z" level=error msg="Failed to destroy network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.231392 containerd[1929]: time="2025-01-13T20:10:23.231330677Z" level=error msg="encountered an error cleaning up failed sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.231645 containerd[1929]: time="2025-01-13T20:10:23.231604041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.232782 kubelet[2399]: E0113 20:10:23.232723 2399 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.232918 kubelet[2399]: E0113 20:10:23.232803 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:23.232918 kubelet[2399]: E0113 20:10:23.232838 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:23.233060 kubelet[2399]: E0113 20:10:23.232905 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:23.267460 containerd[1929]: time="2025-01-13T20:10:23.267147408Z" level=error msg="Failed to destroy network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.268092 containerd[1929]: time="2025-01-13T20:10:23.267773581Z" level=error msg="encountered an error cleaning up failed sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.268092 containerd[1929]: time="2025-01-13T20:10:23.267874576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:10:23.269396 kubelet[2399]: E0113 20:10:23.268167 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:23.269396 kubelet[2399]: E0113 20:10:23.268309 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:23.269396 kubelet[2399]: E0113 20:10:23.268350 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:23.269619 kubelet[2399]: E0113 20:10:23.268419 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:23.717827 kubelet[2399]: E0113 20:10:23.717464 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:23.943355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162-shm.mount: Deactivated successfully. 
Jan 13 20:10:24.044262 kubelet[2399]: I0113 20:10:24.044129 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162" Jan 13 20:10:24.045965 containerd[1929]: time="2025-01-13T20:10:24.045696318Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:24.046885 containerd[1929]: time="2025-01-13T20:10:24.046671625Z" level=info msg="Ensure that sandbox 7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162 in task-service has been cleanup successfully" Jan 13 20:10:24.050243 containerd[1929]: time="2025-01-13T20:10:24.047986018Z" level=info msg="TearDown network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" successfully" Jan 13 20:10:24.050243 containerd[1929]: time="2025-01-13T20:10:24.048038712Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" returns successfully" Jan 13 20:10:24.051037 containerd[1929]: time="2025-01-13T20:10:24.050700467Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:24.051037 containerd[1929]: time="2025-01-13T20:10:24.050873149Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:24.051037 containerd[1929]: time="2025-01-13T20:10:24.050896849Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:24.052165 systemd[1]: run-netns-cni\x2d563aea62\x2ddb5c\x2da527\x2d0f26\x2d7a35f3fd6bdc.mount: Deactivated successfully. Jan 13 20:10:24.053174 containerd[1929]: time="2025-01-13T20:10:24.052540386Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:24.053174 containerd[1929]: time="2025-01-13T20:10:24.052696500Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:24.053174 containerd[1929]: time="2025-01-13T20:10:24.052718723Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:24.055577 containerd[1929]: time="2025-01-13T20:10:24.054808260Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:24.055577 containerd[1929]: time="2025-01-13T20:10:24.055313280Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:24.055577 containerd[1929]: time="2025-01-13T20:10:24.055345312Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:24.057526 containerd[1929]: time="2025-01-13T20:10:24.056772873Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:24.057526 containerd[1929]: time="2025-01-13T20:10:24.057038098Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:24.057526 containerd[1929]: time="2025-01-13T20:10:24.057063671Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns 
successfully" Jan 13 20:10:24.057894 containerd[1929]: time="2025-01-13T20:10:24.057846642Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:24.058049 containerd[1929]: time="2025-01-13T20:10:24.058000787Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:24.058049 containerd[1929]: time="2025-01-13T20:10:24.058031943Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:24.058533 kubelet[2399]: I0113 20:10:24.058471 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d" Jan 13 20:10:24.059063 containerd[1929]: time="2025-01-13T20:10:24.059010083Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:24.059256 containerd[1929]: time="2025-01-13T20:10:24.059170183Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:24.059256 containerd[1929]: time="2025-01-13T20:10:24.059192587Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:24.060680 containerd[1929]: time="2025-01-13T20:10:24.060501541Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:24.060982 containerd[1929]: time="2025-01-13T20:10:24.060776598Z" level=info msg="Ensure that sandbox 718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d in task-service has been cleanup successfully" Jan 13 20:10:24.063559 containerd[1929]: time="2025-01-13T20:10:24.063469040Z" level=info msg="TearDown network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" successfully" Jan 13 20:10:24.063559 containerd[1929]: time="2025-01-13T20:10:24.063543729Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" returns successfully" Jan 13 20:10:24.065753 containerd[1929]: time="2025-01-13T20:10:24.065589167Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:24.065753 containerd[1929]: time="2025-01-13T20:10:24.065634142Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:24.065922 containerd[1929]: time="2025-01-13T20:10:24.065777025Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:24.065922 containerd[1929]: time="2025-01-13T20:10:24.065799429Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:24.065966 systemd[1]: run-netns-cni\x2dd1113e05\x2dae60\x2da415\x2da97f\x2dc3380e4b2244.mount: Deactivated successfully. 
Jan 13 20:10:24.068307 containerd[1929]: time="2025-01-13T20:10:24.066716723Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:24.068307 containerd[1929]: time="2025-01-13T20:10:24.066754938Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:24.068740 containerd[1929]: time="2025-01-13T20:10:24.068618281Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:24.069423 containerd[1929]: time="2025-01-13T20:10:24.069387434Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:24.069540 containerd[1929]: time="2025-01-13T20:10:24.069513509Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:24.069996 containerd[1929]: time="2025-01-13T20:10:24.069945328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:8,}" Jan 13 20:10:24.071833 containerd[1929]: time="2025-01-13T20:10:24.071672223Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:24.072450 containerd[1929]: time="2025-01-13T20:10:24.072408515Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:24.072574 containerd[1929]: time="2025-01-13T20:10:24.072449744Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:24.074391 containerd[1929]: time="2025-01-13T20:10:24.073927202Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:24.074391 containerd[1929]: time="2025-01-13T20:10:24.074091565Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:24.074391 containerd[1929]: time="2025-01-13T20:10:24.074113211Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:24.075385 containerd[1929]: time="2025-01-13T20:10:24.075342049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:5,}" Jan 13 20:10:24.260697 containerd[1929]: time="2025-01-13T20:10:24.260156736Z" level=error msg="Failed to destroy network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.260855 containerd[1929]: time="2025-01-13T20:10:24.260768129Z" level=error msg="encountered an error cleaning up failed sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.260913 
containerd[1929]: time="2025-01-13T20:10:24.260858030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.261198 kubelet[2399]: E0113 20:10:24.261152 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.262013 kubelet[2399]: E0113 20:10:24.261766 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:24.262013 kubelet[2399]: E0113 20:10:24.261842 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:24.262013 kubelet[2399]: E0113 20:10:24.261956 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:24.288105 containerd[1929]: time="2025-01-13T20:10:24.287907453Z" level=error msg="Failed to destroy network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.289132 containerd[1929]: time="2025-01-13T20:10:24.289068985Z" level=error msg="encountered an error cleaning up failed sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 13 20:10:24.289351 containerd[1929]: time="2025-01-13T20:10:24.289172813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.290082 kubelet[2399]: E0113 20:10:24.289785 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:24.290082 kubelet[2399]: E0113 20:10:24.289875 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:24.290082 kubelet[2399]: E0113 20:10:24.289910 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:24.290444 kubelet[2399]: E0113 20:10:24.290004 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:24.718048 kubelet[2399]: E0113 20:10:24.717974 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:24.943316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052-shm.mount: Deactivated successfully. 
Jan 13 20:10:25.068861 kubelet[2399]: I0113 20:10:25.068825 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052" Jan 13 20:10:25.070820 containerd[1929]: time="2025-01-13T20:10:25.070201982Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" Jan 13 20:10:25.070820 containerd[1929]: time="2025-01-13T20:10:25.070494940Z" level=info msg="Ensure that sandbox 7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052 in task-service has been cleanup successfully" Jan 13 20:10:25.074152 systemd[1]: run-netns-cni\x2d4d32da2a\x2d6dbf\x2dfc72\x2d7beb\x2d3a3e2ad2fb7f.mount: Deactivated successfully. Jan 13 20:10:25.078283 containerd[1929]: time="2025-01-13T20:10:25.077851076Z" level=info msg="TearDown network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" successfully" Jan 13 20:10:25.078283 containerd[1929]: time="2025-01-13T20:10:25.078107104Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" returns successfully" Jan 13 20:10:25.079587 containerd[1929]: time="2025-01-13T20:10:25.079528363Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:25.079707 containerd[1929]: time="2025-01-13T20:10:25.079687718Z" level=info msg="TearDown network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" successfully" Jan 13 20:10:25.079762 containerd[1929]: time="2025-01-13T20:10:25.079710938Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" returns successfully" Jan 13 20:10:25.080777 containerd[1929]: time="2025-01-13T20:10:25.080735566Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:25.081193 containerd[1929]: time="2025-01-13T20:10:25.081080186Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:25.081193 containerd[1929]: time="2025-01-13T20:10:25.081132917Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:25.083330 containerd[1929]: time="2025-01-13T20:10:25.083172952Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:25.083639 containerd[1929]: time="2025-01-13T20:10:25.083519974Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:25.083639 containerd[1929]: time="2025-01-13T20:10:25.083576702Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:25.085084 containerd[1929]: time="2025-01-13T20:10:25.084911781Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:25.085428 containerd[1929]: time="2025-01-13T20:10:25.085292623Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:25.085428 containerd[1929]: time="2025-01-13T20:10:25.085345822Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns 
successfully" Jan 13 20:10:25.086523 containerd[1929]: time="2025-01-13T20:10:25.086395435Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:25.086633 containerd[1929]: time="2025-01-13T20:10:25.086560913Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:25.086633 containerd[1929]: time="2025-01-13T20:10:25.086587050Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:25.088242 containerd[1929]: time="2025-01-13T20:10:25.087476623Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:25.088242 containerd[1929]: time="2025-01-13T20:10:25.087634670Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:25.088242 containerd[1929]: time="2025-01-13T20:10:25.087657133Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:25.088488 kubelet[2399]: I0113 20:10:25.088254 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c" Jan 13 20:10:25.088694 containerd[1929]: time="2025-01-13T20:10:25.088643954Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:25.088835 containerd[1929]: time="2025-01-13T20:10:25.088795830Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:25.088910 containerd[1929]: time="2025-01-13T20:10:25.088828571Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:25.090324 containerd[1929]: time="2025-01-13T20:10:25.090266301Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:25.090475 containerd[1929]: time="2025-01-13T20:10:25.090426269Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:25.090475 containerd[1929]: time="2025-01-13T20:10:25.090449393Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:25.091549 containerd[1929]: time="2025-01-13T20:10:25.091495055Z" level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" Jan 13 20:10:25.091802 containerd[1929]: time="2025-01-13T20:10:25.091760160Z" level=info msg="Ensure that sandbox 3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c in task-service has been cleanup successfully" Jan 13 20:10:25.092900 containerd[1929]: time="2025-01-13T20:10:25.092542170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:9,}" Jan 13 20:10:25.095410 containerd[1929]: time="2025-01-13T20:10:25.094418792Z" level=info msg="TearDown network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" successfully" Jan 13 20:10:25.095410 containerd[1929]: time="2025-01-13T20:10:25.095399142Z" 
level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" returns successfully" Jan 13 20:10:25.097396 containerd[1929]: time="2025-01-13T20:10:25.096273011Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:25.097396 containerd[1929]: time="2025-01-13T20:10:25.096431610Z" level=info msg="TearDown network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" successfully" Jan 13 20:10:25.097396 containerd[1929]: time="2025-01-13T20:10:25.096454097Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" returns successfully" Jan 13 20:10:25.097396 containerd[1929]: time="2025-01-13T20:10:25.097137275Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:25.097396 containerd[1929]: time="2025-01-13T20:10:25.097342674Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:25.096951 systemd[1]: run-netns-cni\x2d5d0f7a99\x2df3bd\x2d0c16\x2de876\x2d7d92eda78492.mount: Deactivated successfully. Jan 13 20:10:25.098375 containerd[1929]: time="2025-01-13T20:10:25.097893569Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:25.098862 containerd[1929]: time="2025-01-13T20:10:25.098712750Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:25.100070 containerd[1929]: time="2025-01-13T20:10:25.099431598Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:25.100070 containerd[1929]: time="2025-01-13T20:10:25.099467399Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:25.103955 containerd[1929]: time="2025-01-13T20:10:25.103617297Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:25.103955 containerd[1929]: time="2025-01-13T20:10:25.103781371Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:25.103955 containerd[1929]: time="2025-01-13T20:10:25.103803979Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:25.104809 containerd[1929]: time="2025-01-13T20:10:25.104729149Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:25.105568 containerd[1929]: time="2025-01-13T20:10:25.105527668Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:25.106015 containerd[1929]: time="2025-01-13T20:10:25.105565103Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:25.108808 containerd[1929]: time="2025-01-13T20:10:25.108733810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:6,}" Jan 13 20:10:25.313191 containerd[1929]: 
time="2025-01-13T20:10:25.311548769Z" level=error msg="Failed to destroy network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.313916 containerd[1929]: time="2025-01-13T20:10:25.313861412Z" level=error msg="encountered an error cleaning up failed sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.314093 containerd[1929]: time="2025-01-13T20:10:25.314054817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.314470 containerd[1929]: time="2025-01-13T20:10:25.313127894Z" level=error msg="Failed to destroy network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.315114 containerd[1929]: time="2025-01-13T20:10:25.314915647Z" level=error msg="encountered an error cleaning up failed sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.315114 containerd[1929]: time="2025-01-13T20:10:25.314996784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.315704 kubelet[2399]: E0113 20:10:25.315644 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.315811 kubelet[2399]: E0113 20:10:25.315730 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:25.315811 kubelet[2399]: E0113 20:10:25.315772 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-7xnjr" Jan 13 20:10:25.315932 kubelet[2399]: E0113 20:10:25.315833 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-7xnjr_default(95bdcc03-7517-44ab-a81e-a674bfc95b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-7xnjr" podUID="95bdcc03-7517-44ab-a81e-a674bfc95b9f" Jan 13 20:10:25.316331 kubelet[2399]: E0113 20:10:25.315664 2399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:10:25.316331 kubelet[2399]: E0113 20:10:25.316118 2399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:25.316331 kubelet[2399]: E0113 20:10:25.316169 2399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57qj4" Jan 13 20:10:25.316619 kubelet[2399]: E0113 20:10:25.316261 2399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57qj4_calico-system(eaac7307-8099-4833-a054-611983b758f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-57qj4" podUID="eaac7307-8099-4833-a054-611983b758f2" Jan 13 20:10:25.639932 containerd[1929]: time="2025-01-13T20:10:25.639861473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:25.641393 containerd[1929]: time="2025-01-13T20:10:25.641327778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:10:25.641966 containerd[1929]: time="2025-01-13T20:10:25.641900716Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:25.645931 containerd[1929]: time="2025-01-13T20:10:25.645836703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:25.647244 containerd[1929]: time="2025-01-13T20:10:25.647029644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.715952419s" Jan 13 20:10:25.647244 containerd[1929]: time="2025-01-13T20:10:25.647085135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:10:25.659818 containerd[1929]: time="2025-01-13T20:10:25.659660544Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:10:25.679535 containerd[1929]: time="2025-01-13T20:10:25.679404363Z" level=info msg="CreateContainer within sandbox \"c206e1e91968a7e73cf3356148667095fd7416ccd4396a91e1f32d0b7564e707\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18a1c4cd39d94a3b6f4031b87af9a6e6e1e143580fbd13d3685ee28448728868\"" Jan 13 20:10:25.680256 containerd[1929]: time="2025-01-13T20:10:25.680118720Z" level=info msg="StartContainer for \"18a1c4cd39d94a3b6f4031b87af9a6e6e1e143580fbd13d3685ee28448728868\"" Jan 13 20:10:25.719084 kubelet[2399]: E0113 20:10:25.719013 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:25.732523 systemd[1]: Started cri-containerd-18a1c4cd39d94a3b6f4031b87af9a6e6e1e143580fbd13d3685ee28448728868.scope - libcontainer container 18a1c4cd39d94a3b6f4031b87af9a6e6e1e143580fbd13d3685ee28448728868. Jan 13 20:10:25.788662 containerd[1929]: time="2025-01-13T20:10:25.788481995Z" level=info msg="StartContainer for \"18a1c4cd39d94a3b6f4031b87af9a6e6e1e143580fbd13d3685ee28448728868\" returns successfully" Jan 13 20:10:25.899915 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:10:25.900129 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:10:25.948357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad-shm.mount: Deactivated successfully. 
Jan 13 20:10:25.948817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647897734.mount: Deactivated successfully. Jan 13 20:10:26.100269 kubelet[2399]: I0113 20:10:26.100232 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad" Jan 13 20:10:26.102287 containerd[1929]: time="2025-01-13T20:10:26.102187792Z" level=info msg="StopPodSandbox for \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\"" Jan 13 20:10:26.102891 containerd[1929]: time="2025-01-13T20:10:26.102496790Z" level=info msg="Ensure that sandbox a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad in task-service has been cleanup successfully" Jan 13 20:10:26.105194 containerd[1929]: time="2025-01-13T20:10:26.104760690Z" level=info msg="TearDown network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" successfully" Jan 13 20:10:26.105194 containerd[1929]: time="2025-01-13T20:10:26.104852932Z" level=info msg="StopPodSandbox for \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" returns successfully" Jan 13 20:10:26.107083 containerd[1929]: time="2025-01-13T20:10:26.107015104Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" Jan 13 20:10:26.107261 containerd[1929]: time="2025-01-13T20:10:26.107183897Z" level=info msg="TearDown network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" successfully" Jan 13 20:10:26.107332 containerd[1929]: time="2025-01-13T20:10:26.107207044Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" returns successfully" Jan 13 20:10:26.107461 systemd[1]: run-netns-cni\x2dc38e6e3c\x2db1a7\x2d6a12\x2da2eb\x2d3a58e5c21f49.mount: Deactivated successfully. 
Jan 13 20:10:26.110735 containerd[1929]: time="2025-01-13T20:10:26.109488773Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:26.110735 containerd[1929]: time="2025-01-13T20:10:26.109692803Z" level=info msg="TearDown network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" successfully" Jan 13 20:10:26.110735 containerd[1929]: time="2025-01-13T20:10:26.109717283Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" returns successfully" Jan 13 20:10:26.110986 containerd[1929]: time="2025-01-13T20:10:26.110753545Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:26.110986 containerd[1929]: time="2025-01-13T20:10:26.110908266Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:26.110986 containerd[1929]: time="2025-01-13T20:10:26.110930141Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:26.113167 containerd[1929]: time="2025-01-13T20:10:26.112297300Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:26.114050 containerd[1929]: time="2025-01-13T20:10:26.113423655Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:26.114050 containerd[1929]: time="2025-01-13T20:10:26.113456107Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:26.115015 containerd[1929]: time="2025-01-13T20:10:26.114932065Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:26.115924 containerd[1929]: time="2025-01-13T20:10:26.115873395Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:26.115924 containerd[1929]: time="2025-01-13T20:10:26.115915741Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:26.117347 containerd[1929]: time="2025-01-13T20:10:26.116631634Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:26.117347 containerd[1929]: time="2025-01-13T20:10:26.116780701Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:26.117347 containerd[1929]: time="2025-01-13T20:10:26.116802156Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:26.117347 containerd[1929]: time="2025-01-13T20:10:26.117071798Z" level=info msg="StopPodSandbox for \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\"" Jan 13 20:10:26.117559 kubelet[2399]: I0113 20:10:26.115904 2399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630" Jan 13 20:10:26.117680 containerd[1929]: time="2025-01-13T20:10:26.117400391Z" level=info msg="Ensure that sandbox 
735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630 in task-service has been cleanup successfully" Jan 13 20:10:26.117780 containerd[1929]: time="2025-01-13T20:10:26.117736030Z" level=info msg="TearDown network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" successfully" Jan 13 20:10:26.117873 containerd[1929]: time="2025-01-13T20:10:26.117770319Z" level=info msg="StopPodSandbox for \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" returns successfully" Jan 13 20:10:26.120446 containerd[1929]: time="2025-01-13T20:10:26.120278865Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:26.120919 containerd[1929]: time="2025-01-13T20:10:26.120451391Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:26.120919 containerd[1929]: time="2025-01-13T20:10:26.120476940Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:26.122634 containerd[1929]: time="2025-01-13T20:10:26.122573560Z" level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" Jan 13 20:10:26.123096 containerd[1929]: time="2025-01-13T20:10:26.122737622Z" level=info msg="TearDown network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" successfully" Jan 13 20:10:26.123096 containerd[1929]: time="2025-01-13T20:10:26.122760493Z" level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" returns successfully" Jan 13 20:10:26.124680 systemd[1]: run-netns-cni\x2dfed74322\x2d9546\x2ddd67\x2d261b\x2d5b6cc65fa85f.mount: Deactivated successfully. 
Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.124760621Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.124906998Z" level=info msg="TearDown network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" successfully" Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.124929605Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" returns successfully" Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.125063292Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.125182296Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:26.125447 containerd[1929]: time="2025-01-13T20:10:26.125203798Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127377077Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127541006Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127563122Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127377053Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127735636Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:26.128147 containerd[1929]: time="2025-01-13T20:10:26.127756574Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:26.129636 containerd[1929]: time="2025-01-13T20:10:26.129564594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:10,}" Jan 13 20:10:26.130016 containerd[1929]: time="2025-01-13T20:10:26.129955245Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:26.130376 containerd[1929]: time="2025-01-13T20:10:26.130345668Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:26.130582 containerd[1929]: time="2025-01-13T20:10:26.130440035Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:26.131355 containerd[1929]: time="2025-01-13T20:10:26.131311215Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:26.131716 containerd[1929]: time="2025-01-13T20:10:26.131467809Z" level=info msg="TearDown network for sandbox 
\"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:26.131716 containerd[1929]: time="2025-01-13T20:10:26.131499817Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:26.133500 containerd[1929]: time="2025-01-13T20:10:26.133068953Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:26.133500 containerd[1929]: time="2025-01-13T20:10:26.133297908Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:26.133500 containerd[1929]: time="2025-01-13T20:10:26.133321511Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:26.135814 containerd[1929]: time="2025-01-13T20:10:26.135694257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:7,}" Jan 13 20:10:26.575304 systemd-networkd[1850]: cali729a1448bc8: Link UP Jan 13 20:10:26.575525 (udev-worker)[3470]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:26.577988 systemd-networkd[1850]: cali729a1448bc8: Gained carrier Jan 13 20:10:26.598579 kubelet[2399]: I0113 20:10:26.598500 2399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-shb9d" podStartSLOduration=5.227810353 podStartE2EDuration="27.59847642s" podCreationTimestamp="2025-01-13 20:09:59 +0000 UTC" firstStartedPulling="2025-01-13 20:10:03.277827036 +0000 UTC m=+5.147756372" lastFinishedPulling="2025-01-13 20:10:25.648493103 +0000 UTC m=+27.518422439" observedRunningTime="2025-01-13 20:10:26.182453798 +0000 UTC m=+28.052383194" watchObservedRunningTime="2025-01-13 20:10:26.59847642 +0000 UTC m=+28.468405768" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.301 [INFO][3506] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.370 [INFO][3506] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0 nginx-deployment-85f456d6dd- default 95bdcc03-7517-44ab-a81e-a674bfc95b9f 1024 0 2025-01-13 20:10:19 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.215 nginx-deployment-85f456d6dd-7xnjr eth0 default [] [] [kns.default ksa.default.default] cali729a1448bc8 [] []}} ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.370 [INFO][3506] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.454 [INFO][3535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" HandleID="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Workload="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.490 [INFO][3535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" HandleID="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Workload="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.215", "pod":"nginx-deployment-85f456d6dd-7xnjr", "timestamp":"2025-01-13 20:10:26.454012663 +0000 UTC"}, Hostname:"172.31.26.215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.491 [INFO][3535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.491 [INFO][3535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.491 [INFO][3535] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.215' Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.496 [INFO][3535] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.504 [INFO][3535] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.513 [INFO][3535] ipam/ipam.go 489: Trying affinity for 192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.518 [INFO][3535] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.524 [INFO][3535] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.524 [INFO][3535] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.527 [INFO][3535] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.545 [INFO][3535] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.557 [INFO][3535] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.193/26] block=192.168.61.192/26 handle="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.558 [INFO][3535] ipam/ipam.go 847: Auto-assigned 1 
out of 1 IPv4s: [192.168.61.193/26] handle="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" host="172.31.26.215" Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.558 [INFO][3535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:10:26.601310 containerd[1929]: 2025-01-13 20:10:26.558 [INFO][3535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.193/26] IPv6=[] ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" HandleID="k8s-pod-network.3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Workload="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.564 [INFO][3506] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"95bdcc03-7517-44ab-a81e-a674bfc95b9f", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-7xnjr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali729a1448bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.564 [INFO][3506] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.193/32] ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.564 [INFO][3506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali729a1448bc8 ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.574 [INFO][3506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.576 [INFO][3506] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"95bdcc03-7517-44ab-a81e-a674bfc95b9f", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db", Pod:"nginx-deployment-85f456d6dd-7xnjr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali729a1448bc8", MAC:"96:a4:58:26:f4:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:26.602664 containerd[1929]: 2025-01-13 20:10:26.597 [INFO][3506] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db" Namespace="default" Pod="nginx-deployment-85f456d6dd-7xnjr" WorkloadEndpoint="172.31.26.215-k8s-nginx--deployment--85f456d6dd--7xnjr-eth0" Jan 13 20:10:26.636026 containerd[1929]: time="2025-01-13T20:10:26.635265481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:26.636026 containerd[1929]: time="2025-01-13T20:10:26.635942812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:26.636026 containerd[1929]: time="2025-01-13T20:10:26.635970774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:26.636661 containerd[1929]: time="2025-01-13T20:10:26.636427386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:26.668539 systemd[1]: Started cri-containerd-3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db.scope - libcontainer container 3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db. 
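In the IPAM trace above, node 172.31.26.215 holds an affinity for the block 192.168.61.192/26 and claims the single address 192.168.61.193 for the nginx pod, which the workload endpoint then carries as 192.168.61.193/32. The relationship between the affine block and the per-pod address is plain subnet arithmetic; a sketch using only values that appear in the log:

```python
import ipaddress

block = ipaddress.ip_network("192.168.61.192/26")   # block with affinity to this node
pod_ip = ipaddress.ip_address("192.168.61.193")     # address claimed for the nginx pod

print(pod_ip in block)          # True: the pod address comes out of the affine block
print(block.num_addresses)      # 64: a /26 gives this node up to 64 pod addresses
print(block.network_address)    # 192.168.61.192: the same address ntpd later reports for vxlan.calico
```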
Jan 13 20:10:26.673811 systemd-networkd[1850]: calif42c9532ffa: Link UP Jan 13 20:10:26.675561 systemd-networkd[1850]: calif42c9532ffa: Gained carrier Jan 13 20:10:26.720204 kubelet[2399]: E0113 20:10:26.720134 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:26.742773 containerd[1929]: time="2025-01-13T20:10:26.742717753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7xnjr,Uid:95bdcc03-7517-44ab-a81e-a674bfc95b9f,Namespace:default,Attempt:7,} returns sandbox id \"3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db\"" Jan 13 20:10:26.748025 containerd[1929]: time="2025-01-13T20:10:26.747581492Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.287 [INFO][3495] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.358 [INFO][3495] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.215-k8s-csi--node--driver--57qj4-eth0 csi-node-driver- calico-system eaac7307-8099-4833-a054-611983b758f2 814 0 2025-01-13 20:09:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.26.215 csi-node-driver-57qj4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif42c9532ffa [] []}} ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.358 [INFO][3495] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.457 [INFO][3531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" HandleID="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Workload="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.494 [INFO][3531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" HandleID="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Workload="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003164e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.26.215", "pod":"csi-node-driver-57qj4", "timestamp":"2025-01-13 20:10:26.457625509 +0000 UTC"}, Hostname:"172.31.26.215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.494 [INFO][3531] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.558 [INFO][3531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.558 [INFO][3531] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.215' Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.562 [INFO][3531] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.581 [INFO][3531] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.597 [INFO][3531] ipam/ipam.go 489: Trying affinity for 192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.604 [INFO][3531] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.610 [INFO][3531] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.610 [INFO][3531] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.614 [INFO][3531] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4 Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.624 [INFO][3531] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.661 [INFO][3531] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.194/26] block=192.168.61.192/26 handle="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.661 [INFO][3531] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.194/26] handle="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" host="172.31.26.215" Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.661 [INFO][3531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:10:26.749134 containerd[1929]: 2025-01-13 20:10:26.661 [INFO][3531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.194/26] IPv6=[] ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" HandleID="k8s-pod-network.725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Workload="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.666 [INFO][3495] cni-plugin/k8s.go 386: Populated endpoint ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-csi--node--driver--57qj4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eaac7307-8099-4833-a054-611983b758f2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"", Pod:"csi-node-driver-57qj4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif42c9532ffa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.667 [INFO][3495] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.194/32] ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.667 [INFO][3495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif42c9532ffa ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.674 [INFO][3495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.676 [INFO][3495] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" 
WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-csi--node--driver--57qj4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eaac7307-8099-4833-a054-611983b758f2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4", Pod:"csi-node-driver-57qj4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif42c9532ffa", MAC:"e6:8b:e5:13:1d:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:26.750563 containerd[1929]: 2025-01-13 20:10:26.745 [INFO][3495] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4" Namespace="calico-system" Pod="csi-node-driver-57qj4" WorkloadEndpoint="172.31.26.215-k8s-csi--node--driver--57qj4-eth0" Jan 13 20:10:26.783844 containerd[1929]: time="2025-01-13T20:10:26.783618630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:26.783844 containerd[1929]: time="2025-01-13T20:10:26.783749327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:26.783844 containerd[1929]: time="2025-01-13T20:10:26.783776125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:26.784331 containerd[1929]: time="2025-01-13T20:10:26.783927004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:26.811533 systemd[1]: Started cri-containerd-725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4.scope - libcontainer container 725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4. 
Jan 13 20:10:26.851481 containerd[1929]: time="2025-01-13T20:10:26.851303176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57qj4,Uid:eaac7307-8099-4833-a054-611983b758f2,Namespace:calico-system,Attempt:10,} returns sandbox id \"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4\"" Jan 13 20:10:27.721243 kubelet[2399]: E0113 20:10:27.721155 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:27.921030 systemd-networkd[1850]: cali729a1448bc8: Gained IPv6LL Jan 13 20:10:27.967264 kernel: bpftool[3794]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:10:27.984417 systemd-networkd[1850]: calif42c9532ffa: Gained IPv6LL Jan 13 20:10:28.271421 systemd-networkd[1850]: vxlan.calico: Link UP Jan 13 20:10:28.271439 systemd-networkd[1850]: vxlan.calico: Gained carrier Jan 13 20:10:28.317098 (udev-worker)[3468]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:28.722086 kubelet[2399]: E0113 20:10:28.721369 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:29.713255 systemd-networkd[1850]: vxlan.calico: Gained IPv6LL Jan 13 20:10:29.722117 kubelet[2399]: E0113 20:10:29.721733 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:30.707990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664242028.mount: Deactivated successfully. Jan 13 20:10:30.722968 kubelet[2399]: E0113 20:10:30.721875 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:31.722297 kubelet[2399]: E0113 20:10:31.722172 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:32.036348 containerd[1929]: time="2025-01-13T20:10:32.036273309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:32.038259 containerd[1929]: time="2025-01-13T20:10:32.038122365Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 20:10:32.040618 containerd[1929]: time="2025-01-13T20:10:32.040542991Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:32.045737 containerd[1929]: time="2025-01-13T20:10:32.045657235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:32.047684 containerd[1929]: time="2025-01-13T20:10:32.047494129Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 5.299854179s" Jan 13 20:10:32.047684 containerd[1929]: time="2025-01-13T20:10:32.047547015Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 
20:10:32.051243 containerd[1929]: time="2025-01-13T20:10:32.050638044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:10:32.052549 containerd[1929]: time="2025-01-13T20:10:32.052330518Z" level=info msg="CreateContainer within sandbox \"3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:10:32.086128 containerd[1929]: time="2025-01-13T20:10:32.086077559Z" level=info msg="CreateContainer within sandbox \"3acf2ddcd8cfc52de09059970ef7fba8f4f25dc43fbfdec1754c9d7b1ebba0db\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9\"" Jan 13 20:10:32.087106 containerd[1929]: time="2025-01-13T20:10:32.087040801Z" level=info msg="StartContainer for \"6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9\"" Jan 13 20:10:32.128132 ntpd[1916]: Listen normally on 8 vxlan.calico 192.168.61.192:123 Jan 13 20:10:32.128327 ntpd[1916]: Listen normally on 9 cali729a1448bc8 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:10:32.128794 ntpd[1916]: 13 Jan 20:10:32 ntpd[1916]: Listen normally on 8 vxlan.calico 192.168.61.192:123 Jan 13 20:10:32.128794 ntpd[1916]: 13 Jan 20:10:32 ntpd[1916]: Listen normally on 9 cali729a1448bc8 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:10:32.128794 ntpd[1916]: 13 Jan 20:10:32 ntpd[1916]: Listen normally on 10 calif42c9532ffa [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:10:32.128794 ntpd[1916]: 13 Jan 20:10:32 ntpd[1916]: Listen normally on 11 vxlan.calico [fe80::647d:2cff:fee4:1e91%5]:123 Jan 13 20:10:32.128414 ntpd[1916]: Listen normally on 10 calif42c9532ffa [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:10:32.128481 ntpd[1916]: Listen normally on 11 vxlan.calico [fe80::647d:2cff:fee4:1e91%5]:123 Jan 13 20:10:32.140529 systemd[1]: run-containerd-runc-k8s.io-6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9-runc.pi7wpg.mount: Deactivated successfully. Jan 13 20:10:32.149525 systemd[1]: Started cri-containerd-6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9.scope - libcontainer container 6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9. 
Jan 13 20:10:32.204300 containerd[1929]: time="2025-01-13T20:10:32.203934723Z" level=info msg="StartContainer for \"6b63718803fd935369ea55a87709e3a2a1abb832d3eca1069d2924503624cba9\" returns successfully" Jan 13 20:10:32.723350 kubelet[2399]: E0113 20:10:32.723291 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:33.214756 kubelet[2399]: I0113 20:10:33.214686 2399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-7xnjr" podStartSLOduration=8.911330187 podStartE2EDuration="14.214667476s" podCreationTimestamp="2025-01-13 20:10:19 +0000 UTC" firstStartedPulling="2025-01-13 20:10:26.746351405 +0000 UTC m=+28.616280741" lastFinishedPulling="2025-01-13 20:10:32.049688694 +0000 UTC m=+33.919618030" observedRunningTime="2025-01-13 20:10:33.214525733 +0000 UTC m=+35.084455105" watchObservedRunningTime="2025-01-13 20:10:33.214667476 +0000 UTC m=+35.084596824" Jan 13 20:10:33.353952 containerd[1929]: time="2025-01-13T20:10:33.353886904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:33.355383 containerd[1929]: time="2025-01-13T20:10:33.355247988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:10:33.357268 containerd[1929]: time="2025-01-13T20:10:33.357169669Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:33.362031 containerd[1929]: time="2025-01-13T20:10:33.361967038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:33.364180 containerd[1929]: time="2025-01-13T20:10:33.363374021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.312679056s" Jan 13 20:10:33.364180 containerd[1929]: time="2025-01-13T20:10:33.363434135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:10:33.366730 containerd[1929]: time="2025-01-13T20:10:33.366674110Z" level=info msg="CreateContainer within sandbox \"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:10:33.392392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626360605.mount: Deactivated successfully. 
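The pod_startup_latency_tracker entry above for default/nginx-deployment-85f456d6dd-7xnjr is internally consistent: the image-pull window (lastFinishedPulling minus firstStartedPulling) is 5.303337289s, the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) is 14.214667476s, and the difference is exactly the reported podStartSLOduration of 8.911330187s. A small sketch re-deriving those numbers from the logged timestamps; it only checks the arithmetic and does not assert kubelet's exact formula:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-13 20:10:19 +0000 UTC")
	firstPull := parse("2025-01-13 20:10:26.746351405 +0000 UTC")
	lastPull := parse("2025-01-13 20:10:32.049688694 +0000 UTC")
	observed := parse("2025-01-13 20:10:33.214667476 +0000 UTC")

	pullWindow := lastPull.Sub(firstPull) // 5.303337289s
	e2e := observed.Sub(created)          // 14.214667476s == podStartE2EDuration
	slo := e2e - pullWindow               // 8.911330187s  == podStartSLOduration
	fmt.Println(pullWindow, e2e, slo)
}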
Jan 13 20:10:33.394441 containerd[1929]: time="2025-01-13T20:10:33.394270250Z" level=info msg="CreateContainer within sandbox \"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176\"" Jan 13 20:10:33.395804 containerd[1929]: time="2025-01-13T20:10:33.395094990Z" level=info msg="StartContainer for \"f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176\"" Jan 13 20:10:33.443845 systemd[1]: run-containerd-runc-k8s.io-f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176-runc.r2T9bN.mount: Deactivated successfully. Jan 13 20:10:33.453630 systemd[1]: Started cri-containerd-f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176.scope - libcontainer container f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176. Jan 13 20:10:33.517397 containerd[1929]: time="2025-01-13T20:10:33.517331739Z" level=info msg="StartContainer for \"f93da031f9a8b5a27cf9d6a5f04589f4031941b218646b6f7807d18db9fc3176\" returns successfully" Jan 13 20:10:33.519320 containerd[1929]: time="2025-01-13T20:10:33.519132483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:10:33.723831 kubelet[2399]: E0113 20:10:33.723770 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:34.666391 update_engine[1923]: I20250113 20:10:34.666313 1923 update_attempter.cc:509] Updating boot flags... Jan 13 20:10:34.725812 kubelet[2399]: E0113 20:10:34.724204 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:34.771551 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4019) Jan 13 20:10:35.149327 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4022) Jan 13 20:10:35.252649 containerd[1929]: time="2025-01-13T20:10:35.251192000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:35.256702 containerd[1929]: time="2025-01-13T20:10:35.256628569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:10:35.264401 containerd[1929]: time="2025-01-13T20:10:35.264313921Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:35.284120 containerd[1929]: time="2025-01-13T20:10:35.283528048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:35.292806 containerd[1929]: time="2025-01-13T20:10:35.292536138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.773243867s" Jan 13 20:10:35.292806 
containerd[1929]: time="2025-01-13T20:10:35.292622101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:10:35.302304 containerd[1929]: time="2025-01-13T20:10:35.302171925Z" level=info msg="CreateContainer within sandbox \"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:10:35.353342 containerd[1929]: time="2025-01-13T20:10:35.352576199Z" level=info msg="CreateContainer within sandbox \"725efdec0be0b63bc6dfb0391066e6d1c79d6e3e7a2cb58ec33776014b7037d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cfaee4d3e52cff591126ebc5331b9b0b0927e547ca6e463cf793b2a0b7365775\"" Jan 13 20:10:35.353505 containerd[1929]: time="2025-01-13T20:10:35.353334594Z" level=info msg="StartContainer for \"cfaee4d3e52cff591126ebc5331b9b0b0927e547ca6e463cf793b2a0b7365775\"" Jan 13 20:10:35.469548 systemd[1]: Started cri-containerd-cfaee4d3e52cff591126ebc5331b9b0b0927e547ca6e463cf793b2a0b7365775.scope - libcontainer container cfaee4d3e52cff591126ebc5331b9b0b0927e547ca6e463cf793b2a0b7365775. Jan 13 20:10:35.530442 containerd[1929]: time="2025-01-13T20:10:35.530372538Z" level=info msg="StartContainer for \"cfaee4d3e52cff591126ebc5331b9b0b0927e547ca6e463cf793b2a0b7365775\" returns successfully" Jan 13 20:10:35.725305 kubelet[2399]: E0113 20:10:35.725117 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:35.873361 kubelet[2399]: I0113 20:10:35.873313 2399 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:10:35.873361 kubelet[2399]: I0113 20:10:35.873364 2399 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:10:36.264051 kubelet[2399]: I0113 20:10:36.263956 2399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-57qj4" podStartSLOduration=28.818124164 podStartE2EDuration="37.26393697s" podCreationTimestamp="2025-01-13 20:09:59 +0000 UTC" firstStartedPulling="2025-01-13 20:10:26.853996158 +0000 UTC m=+28.723925518" lastFinishedPulling="2025-01-13 20:10:35.299808976 +0000 UTC m=+37.169738324" observedRunningTime="2025-01-13 20:10:36.263861308 +0000 UTC m=+38.133790680" watchObservedRunningTime="2025-01-13 20:10:36.26393697 +0000 UTC m=+38.133866318" Jan 13 20:10:36.725743 kubelet[2399]: E0113 20:10:36.725683 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:37.726384 kubelet[2399]: E0113 20:10:37.726314 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:38.726790 kubelet[2399]: E0113 20:10:38.726733 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:39.701073 kubelet[2399]: E0113 20:10:39.701003 2399 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:39.727593 kubelet[2399]: E0113 20:10:39.727527 2399 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:40.728420 kubelet[2399]: E0113 20:10:40.728359 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:41.000591 kubelet[2399]: I0113 20:10:41.000355 2399 topology_manager.go:215] "Topology Admit Handler" podUID="2b6b57a7-3103-4e00-b1d6-4bb06382af4f" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:10:41.012628 systemd[1]: Created slice kubepods-besteffort-pod2b6b57a7_3103_4e00_b1d6_4bb06382af4f.slice - libcontainer container kubepods-besteffort-pod2b6b57a7_3103_4e00_b1d6_4bb06382af4f.slice. Jan 13 20:10:41.111692 kubelet[2399]: I0113 20:10:41.111610 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfx2x\" (UniqueName: \"kubernetes.io/projected/2b6b57a7-3103-4e00-b1d6-4bb06382af4f-kube-api-access-lfx2x\") pod \"nfs-server-provisioner-0\" (UID: \"2b6b57a7-3103-4e00-b1d6-4bb06382af4f\") " pod="default/nfs-server-provisioner-0" Jan 13 20:10:41.111692 kubelet[2399]: I0113 20:10:41.111679 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2b6b57a7-3103-4e00-b1d6-4bb06382af4f-data\") pod \"nfs-server-provisioner-0\" (UID: \"2b6b57a7-3103-4e00-b1d6-4bb06382af4f\") " pod="default/nfs-server-provisioner-0" Jan 13 20:10:41.318753 containerd[1929]: time="2025-01-13T20:10:41.318621100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2b6b57a7-3103-4e00-b1d6-4bb06382af4f,Namespace:default,Attempt:0,}" Jan 13 20:10:41.594020 systemd-networkd[1850]: cali60e51b789ff: Link UP Jan 13 20:10:41.596093 systemd-networkd[1850]: cali60e51b789ff: Gained carrier Jan 13 20:10:41.602400 (udev-worker)[4264]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.406 [INFO][4245] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.215-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 2b6b57a7-3103-4e00-b1d6-4bb06382af4f 1185 0 2025-01-13 20:10:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.26.215 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.406 [INFO][4245] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.453 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" HandleID="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Workload="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.491 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" HandleID="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Workload="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332d90), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.215", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:10:41.453011319 +0000 UTC"}, Hostname:"172.31.26.215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.491 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.491 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.491 [INFO][4257] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.215' Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.497 [INFO][4257] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.507 [INFO][4257] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.521 [INFO][4257] ipam/ipam.go 489: Trying affinity for 192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.533 [INFO][4257] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.540 [INFO][4257] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.540 [INFO][4257] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.551 [INFO][4257] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71 Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.561 [INFO][4257] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.585 [INFO][4257] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.195/26] block=192.168.61.192/26 handle="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.585 [INFO][4257] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.195/26] handle="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" host="172.31.26.215" Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.586 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
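The IPAM trace above assigns 192.168.61.195 to nfs-server-provisioner-0 from the host-affine block 192.168.61.192/26, the same block that already holds 192.168.61.194 for csi-node-driver-57qj4 and 192.168.61.192 on vxlan.calico (per the earlier ntpd listen entries). A quick containment check of that arithmetic, assuming nothing beyond the addresses printed in this log (it does not model Calico's block-affinity logic):

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.61.192/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, ip := range []string{"192.168.61.192", "192.168.61.194", "192.168.61.195"} {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
	}
}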
Jan 13 20:10:41.621029 containerd[1929]: 2025-01-13 20:10:41.586 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.195/26] IPv6=[] ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" HandleID="k8s-pod-network.2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Workload="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.623619 containerd[1929]: 2025-01-13 20:10:41.588 [INFO][4245] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2b6b57a7-3103-4e00-b1d6-4bb06382af4f", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:41.623619 containerd[1929]: 2025-01-13 20:10:41.589 [INFO][4245] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.195/32] ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.623619 containerd[1929]: 2025-01-13 20:10:41.589 [INFO][4245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.623619 containerd[1929]: 2025-01-13 20:10:41.592 [INFO][4245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.624126 containerd[1929]: 2025-01-13 20:10:41.595 [INFO][4245] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2b6b57a7-3103-4e00-b1d6-4bb06382af4f", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:7f:f6:48:a9:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:10:41.624126 containerd[1929]: 2025-01-13 20:10:41.618 [INFO][4245] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.215-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:10:41.659821 containerd[1929]: time="2025-01-13T20:10:41.658842748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:41.659821 containerd[1929]: time="2025-01-13T20:10:41.658960263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:41.659821 containerd[1929]: time="2025-01-13T20:10:41.658997158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:41.659821 containerd[1929]: time="2025-01-13T20:10:41.659149898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:41.707531 systemd[1]: Started cri-containerd-2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71.scope - libcontainer container 2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71. 
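The two WorkloadEndpoint dumps above print the nfs-server-provisioner ports in hex (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the earlier "Calico CNI found existing endpoint" line lists the same ports in decimal (2049, 32803, 20048, 875, 111, 662). A short sketch confirming the two encodings agree; the name/port pairs are copied from the log:

package main

import "fmt"

func main() {
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%04x = %d\n", p.name, p.hex, p.hex)
	}
}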
Jan 13 20:10:41.729526 kubelet[2399]: E0113 20:10:41.729461 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:41.766370 containerd[1929]: time="2025-01-13T20:10:41.765913830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2b6b57a7-3103-4e00-b1d6-4bb06382af4f,Namespace:default,Attempt:0,} returns sandbox id \"2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71\"" Jan 13 20:10:41.769298 containerd[1929]: time="2025-01-13T20:10:41.769189007Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:10:42.730372 kubelet[2399]: E0113 20:10:42.730314 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:43.601678 systemd-networkd[1850]: cali60e51b789ff: Gained IPv6LL Jan 13 20:10:43.730947 kubelet[2399]: E0113 20:10:43.730613 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:44.353791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772452811.mount: Deactivated successfully. Jan 13 20:10:44.731361 kubelet[2399]: E0113 20:10:44.730844 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:45.731561 kubelet[2399]: E0113 20:10:45.731482 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:46.128195 ntpd[1916]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 20:10:46.129713 ntpd[1916]: 13 Jan 20:10:46 ntpd[1916]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 20:10:46.731683 kubelet[2399]: E0113 20:10:46.731618 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:47.113087 containerd[1929]: time="2025-01-13T20:10:47.113031022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:47.116262 containerd[1929]: time="2025-01-13T20:10:47.116162463Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 13 20:10:47.118023 containerd[1929]: time="2025-01-13T20:10:47.117865334Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:47.122526 containerd[1929]: time="2025-01-13T20:10:47.122474425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:47.124804 containerd[1929]: time="2025-01-13T20:10:47.124592452Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.355285786s" Jan 13 20:10:47.124804 containerd[1929]: 
time="2025-01-13T20:10:47.124651353Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 20:10:47.129652 containerd[1929]: time="2025-01-13T20:10:47.129482832Z" level=info msg="CreateContainer within sandbox \"2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:10:47.163455 containerd[1929]: time="2025-01-13T20:10:47.163252168Z" level=info msg="CreateContainer within sandbox \"2b1eabcdfb6155d299a1ec3ecbd22697e2fa19f579d00c5f9b308a622e86cb71\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"eb293d59535846a7727e02c3489a8ac510bb821d12ed8abaa6b1484346eaa40d\"" Jan 13 20:10:47.165569 containerd[1929]: time="2025-01-13T20:10:47.164632630Z" level=info msg="StartContainer for \"eb293d59535846a7727e02c3489a8ac510bb821d12ed8abaa6b1484346eaa40d\"" Jan 13 20:10:47.222494 systemd[1]: Started cri-containerd-eb293d59535846a7727e02c3489a8ac510bb821d12ed8abaa6b1484346eaa40d.scope - libcontainer container eb293d59535846a7727e02c3489a8ac510bb821d12ed8abaa6b1484346eaa40d. Jan 13 20:10:47.269842 containerd[1929]: time="2025-01-13T20:10:47.269175428Z" level=info msg="StartContainer for \"eb293d59535846a7727e02c3489a8ac510bb821d12ed8abaa6b1484346eaa40d\" returns successfully" Jan 13 20:10:47.732283 kubelet[2399]: E0113 20:10:47.732191 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:48.296893 kubelet[2399]: I0113 20:10:48.296798 2399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.938765751 podStartE2EDuration="7.296776863s" podCreationTimestamp="2025-01-13 20:10:41 +0000 UTC" firstStartedPulling="2025-01-13 20:10:41.768561681 +0000 UTC m=+43.638491029" lastFinishedPulling="2025-01-13 20:10:47.126572793 +0000 UTC m=+48.996502141" observedRunningTime="2025-01-13 20:10:48.296587805 +0000 UTC m=+50.166517201" watchObservedRunningTime="2025-01-13 20:10:48.296776863 +0000 UTC m=+50.166706199" Jan 13 20:10:48.732801 kubelet[2399]: E0113 20:10:48.732656 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:49.733725 kubelet[2399]: E0113 20:10:49.733665 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:50.734200 kubelet[2399]: E0113 20:10:50.734127 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:51.734364 kubelet[2399]: E0113 20:10:51.734302 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:52.735490 kubelet[2399]: E0113 20:10:52.735426 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:53.736349 kubelet[2399]: E0113 20:10:53.736288 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:54.736884 kubelet[2399]: E0113 20:10:54.736826 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:55.737159 kubelet[2399]: E0113 
20:10:55.737105 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:56.737623 kubelet[2399]: E0113 20:10:56.737552 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:57.738112 kubelet[2399]: E0113 20:10:57.738052 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:58.738584 kubelet[2399]: E0113 20:10:58.738526 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:59.700508 kubelet[2399]: E0113 20:10:59.700447 2399 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:59.739329 kubelet[2399]: E0113 20:10:59.739256 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:10:59.749177 containerd[1929]: time="2025-01-13T20:10:59.748936873Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:59.749177 containerd[1929]: time="2025-01-13T20:10:59.749132799Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:59.749177 containerd[1929]: time="2025-01-13T20:10:59.749156091Z" level=info msg="StopPodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:59.750513 containerd[1929]: time="2025-01-13T20:10:59.749820767Z" level=info msg="RemovePodSandbox for \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:59.750513 containerd[1929]: time="2025-01-13T20:10:59.749867638Z" level=info msg="Forcibly stopping sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\"" Jan 13 20:10:59.750513 containerd[1929]: time="2025-01-13T20:10:59.750006728Z" level=info msg="TearDown network for sandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" successfully" Jan 13 20:10:59.755048 containerd[1929]: time="2025-01-13T20:10:59.754979769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.755179 containerd[1929]: time="2025-01-13T20:10:59.755063703Z" level=info msg="RemovePodSandbox \"d501709958530a23a3470ef8810c5cc069743d1cc0d92e61371dc0ca4a9d4987\" returns successfully" Jan 13 20:10:59.756307 containerd[1929]: time="2025-01-13T20:10:59.755811965Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:59.756307 containerd[1929]: time="2025-01-13T20:10:59.755968908Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:59.756307 containerd[1929]: time="2025-01-13T20:10:59.755990602Z" level=info msg="StopPodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:59.757090 containerd[1929]: time="2025-01-13T20:10:59.756723893Z" level=info msg="RemovePodSandbox for \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:59.757090 containerd[1929]: time="2025-01-13T20:10:59.756766310Z" level=info msg="Forcibly stopping sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\"" Jan 13 20:10:59.757090 containerd[1929]: time="2025-01-13T20:10:59.756884713Z" level=info msg="TearDown network for sandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" successfully" Jan 13 20:10:59.761442 containerd[1929]: time="2025-01-13T20:10:59.761379748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.761871 containerd[1929]: time="2025-01-13T20:10:59.761460140Z" level=info msg="RemovePodSandbox \"ed6011aa25fb5d2c7ce65a87c06956ed01eee7e17b98ebed3513eff97068d3c8\" returns successfully" Jan 13 20:10:59.762685 containerd[1929]: time="2025-01-13T20:10:59.762403811Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:59.762685 containerd[1929]: time="2025-01-13T20:10:59.762555459Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:59.762685 containerd[1929]: time="2025-01-13T20:10:59.762576650Z" level=info msg="StopPodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:59.763440 containerd[1929]: time="2025-01-13T20:10:59.763082018Z" level=info msg="RemovePodSandbox for \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:59.763440 containerd[1929]: time="2025-01-13T20:10:59.763125912Z" level=info msg="Forcibly stopping sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\"" Jan 13 20:10:59.763440 containerd[1929]: time="2025-01-13T20:10:59.763306050Z" level=info msg="TearDown network for sandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" successfully" Jan 13 20:10:59.767507 containerd[1929]: time="2025-01-13T20:10:59.767454279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.767804 containerd[1929]: time="2025-01-13T20:10:59.767533783Z" level=info msg="RemovePodSandbox \"f716f78d0ce8a68c04d9e821c0f8acc2077b38e6beb0c3203bdea001bd57967b\" returns successfully" Jan 13 20:10:59.768409 containerd[1929]: time="2025-01-13T20:10:59.768294327Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:59.768515 containerd[1929]: time="2025-01-13T20:10:59.768458557Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:59.768515 containerd[1929]: time="2025-01-13T20:10:59.768483818Z" level=info msg="StopPodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:59.770786 containerd[1929]: time="2025-01-13T20:10:59.769367099Z" level=info msg="RemovePodSandbox for \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:59.770786 containerd[1929]: time="2025-01-13T20:10:59.769408868Z" level=info msg="Forcibly stopping sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\"" Jan 13 20:10:59.770786 containerd[1929]: time="2025-01-13T20:10:59.769527235Z" level=info msg="TearDown network for sandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" successfully" Jan 13 20:10:59.775440 containerd[1929]: time="2025-01-13T20:10:59.774811628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.775440 containerd[1929]: time="2025-01-13T20:10:59.774893449Z" level=info msg="RemovePodSandbox \"6011edb5060ed794b1846b6d5cb3234aa80d553c627f60e3327bc143d7d54d03\" returns successfully" Jan 13 20:10:59.775739 containerd[1929]: time="2025-01-13T20:10:59.775587948Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:59.775800 containerd[1929]: time="2025-01-13T20:10:59.775747232Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:59.775800 containerd[1929]: time="2025-01-13T20:10:59.775769671Z" level=info msg="StopPodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:59.776782 containerd[1929]: time="2025-01-13T20:10:59.776734065Z" level=info msg="RemovePodSandbox for \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:59.776782 containerd[1929]: time="2025-01-13T20:10:59.776786015Z" level=info msg="Forcibly stopping sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\"" Jan 13 20:10:59.776974 containerd[1929]: time="2025-01-13T20:10:59.776914287Z" level=info msg="TearDown network for sandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" successfully" Jan 13 20:10:59.783657 containerd[1929]: time="2025-01-13T20:10:59.783593429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.784121 containerd[1929]: time="2025-01-13T20:10:59.783669187Z" level=info msg="RemovePodSandbox \"caf188fe0c6f7b883a6e63be87093f22e12a9f33387a3e1b1d8cf13cc1c10601\" returns successfully" Jan 13 20:10:59.784817 containerd[1929]: time="2025-01-13T20:10:59.784423740Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:59.784817 containerd[1929]: time="2025-01-13T20:10:59.784573383Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:59.784817 containerd[1929]: time="2025-01-13T20:10:59.784595810Z" level=info msg="StopPodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:59.786523 containerd[1929]: time="2025-01-13T20:10:59.785576712Z" level=info msg="RemovePodSandbox for \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:59.786523 containerd[1929]: time="2025-01-13T20:10:59.785633753Z" level=info msg="Forcibly stopping sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\"" Jan 13 20:10:59.786523 containerd[1929]: time="2025-01-13T20:10:59.785759900Z" level=info msg="TearDown network for sandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" successfully" Jan 13 20:10:59.790444 containerd[1929]: time="2025-01-13T20:10:59.790378992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.790560 containerd[1929]: time="2025-01-13T20:10:59.790455014Z" level=info msg="RemovePodSandbox \"b901a04ef4276ce4ccf20b5aa6b27c223c64f7047b395791fca16f24f7172b1d\" returns successfully" Jan 13 20:10:59.791711 containerd[1929]: time="2025-01-13T20:10:59.791301977Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:59.791711 containerd[1929]: time="2025-01-13T20:10:59.791461909Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:59.791711 containerd[1929]: time="2025-01-13T20:10:59.791483340Z" level=info msg="StopPodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:59.792275 containerd[1929]: time="2025-01-13T20:10:59.792184551Z" level=info msg="RemovePodSandbox for \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:59.792348 containerd[1929]: time="2025-01-13T20:10:59.792283816Z" level=info msg="Forcibly stopping sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\"" Jan 13 20:10:59.792437 containerd[1929]: time="2025-01-13T20:10:59.792406842Z" level=info msg="TearDown network for sandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" successfully" Jan 13 20:10:59.799419 containerd[1929]: time="2025-01-13T20:10:59.799353561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.799552 containerd[1929]: time="2025-01-13T20:10:59.799433377Z" level=info msg="RemovePodSandbox \"1313813316b954f72d61081d0aa2388afcc93d75fe64d5d6376d6e9b4a34af49\" returns successfully" Jan 13 20:10:59.800386 containerd[1929]: time="2025-01-13T20:10:59.800105593Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:59.800386 containerd[1929]: time="2025-01-13T20:10:59.800307690Z" level=info msg="TearDown network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" successfully" Jan 13 20:10:59.800386 containerd[1929]: time="2025-01-13T20:10:59.800330802Z" level=info msg="StopPodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" returns successfully" Jan 13 20:10:59.801346 containerd[1929]: time="2025-01-13T20:10:59.801292290Z" level=info msg="RemovePodSandbox for \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:59.801473 containerd[1929]: time="2025-01-13T20:10:59.801348238Z" level=info msg="Forcibly stopping sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\"" Jan 13 20:10:59.801526 containerd[1929]: time="2025-01-13T20:10:59.801473317Z" level=info msg="TearDown network for sandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" successfully" Jan 13 20:10:59.805846 containerd[1929]: time="2025-01-13T20:10:59.805781874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.805970 containerd[1929]: time="2025-01-13T20:10:59.805897300Z" level=info msg="RemovePodSandbox \"7ea18355b914317698b8937df34e1288814e3c23f625f4684707fdb425175162\" returns successfully" Jan 13 20:10:59.806576 containerd[1929]: time="2025-01-13T20:10:59.806481283Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" Jan 13 20:10:59.806665 containerd[1929]: time="2025-01-13T20:10:59.806633039Z" level=info msg="TearDown network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" successfully" Jan 13 20:10:59.806665 containerd[1929]: time="2025-01-13T20:10:59.806655935Z" level=info msg="StopPodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" returns successfully" Jan 13 20:10:59.807723 containerd[1929]: time="2025-01-13T20:10:59.807431967Z" level=info msg="RemovePodSandbox for \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" Jan 13 20:10:59.807723 containerd[1929]: time="2025-01-13T20:10:59.807475560Z" level=info msg="Forcibly stopping sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\"" Jan 13 20:10:59.807723 containerd[1929]: time="2025-01-13T20:10:59.807601431Z" level=info msg="TearDown network for sandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" successfully" Jan 13 20:10:59.812324 containerd[1929]: time="2025-01-13T20:10:59.812265774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.812453 containerd[1929]: time="2025-01-13T20:10:59.812381512Z" level=info msg="RemovePodSandbox \"7290fcc3ade652ad2a5102920795972f0cea77dde835b832c624e44262fcc052\" returns successfully" Jan 13 20:10:59.813331 containerd[1929]: time="2025-01-13T20:10:59.812968509Z" level=info msg="StopPodSandbox for \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\"" Jan 13 20:10:59.813331 containerd[1929]: time="2025-01-13T20:10:59.813135801Z" level=info msg="TearDown network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" successfully" Jan 13 20:10:59.813331 containerd[1929]: time="2025-01-13T20:10:59.813158036Z" level=info msg="StopPodSandbox for \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" returns successfully" Jan 13 20:10:59.814714 containerd[1929]: time="2025-01-13T20:10:59.814005684Z" level=info msg="RemovePodSandbox for \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\"" Jan 13 20:10:59.814714 containerd[1929]: time="2025-01-13T20:10:59.814055197Z" level=info msg="Forcibly stopping sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\"" Jan 13 20:10:59.814714 containerd[1929]: time="2025-01-13T20:10:59.814192101Z" level=info msg="TearDown network for sandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" successfully" Jan 13 20:10:59.823163 containerd[1929]: time="2025-01-13T20:10:59.822937379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.823163 containerd[1929]: time="2025-01-13T20:10:59.823016115Z" level=info msg="RemovePodSandbox \"a914ae68d351f99c40e45a63bfd08dcdb4b6c15cbb456277aaed47f3493b8bad\" returns successfully" Jan 13 20:10:59.823925 containerd[1929]: time="2025-01-13T20:10:59.823867712Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:59.824087 containerd[1929]: time="2025-01-13T20:10:59.824042052Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:59.824087 containerd[1929]: time="2025-01-13T20:10:59.824077385Z" level=info msg="StopPodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:59.824613 containerd[1929]: time="2025-01-13T20:10:59.824559534Z" level=info msg="RemovePodSandbox for \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:59.824613 containerd[1929]: time="2025-01-13T20:10:59.824607090Z" level=info msg="Forcibly stopping sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\"" Jan 13 20:10:59.824871 containerd[1929]: time="2025-01-13T20:10:59.824812224Z" level=info msg="TearDown network for sandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" successfully" Jan 13 20:10:59.862050 containerd[1929]: time="2025-01-13T20:10:59.861993222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.862693 containerd[1929]: time="2025-01-13T20:10:59.862556435Z" level=info msg="RemovePodSandbox \"8f07845f46e4329c936d097147f6a3f7c3239c6b8171fb0cf831ec05b3e65776\" returns successfully" Jan 13 20:10:59.863355 containerd[1929]: time="2025-01-13T20:10:59.863041418Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:59.863355 containerd[1929]: time="2025-01-13T20:10:59.863196919Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:59.863355 containerd[1929]: time="2025-01-13T20:10:59.863253768Z" level=info msg="StopPodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:59.864557 containerd[1929]: time="2025-01-13T20:10:59.864491142Z" level=info msg="RemovePodSandbox for \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:59.864557 containerd[1929]: time="2025-01-13T20:10:59.864552349Z" level=info msg="Forcibly stopping sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\"" Jan 13 20:10:59.864762 containerd[1929]: time="2025-01-13T20:10:59.864687849Z" level=info msg="TearDown network for sandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" successfully" Jan 13 20:10:59.872693 containerd[1929]: time="2025-01-13T20:10:59.872603380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.872693 containerd[1929]: time="2025-01-13T20:10:59.872688371Z" level=info msg="RemovePodSandbox \"9c63fefb20b293d21d672d95dd77324c65c6e7d788a4acec1a0c56b8cd20a215\" returns successfully" Jan 13 20:10:59.873652 containerd[1929]: time="2025-01-13T20:10:59.873559418Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:59.873824 containerd[1929]: time="2025-01-13T20:10:59.873739988Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:59.873824 containerd[1929]: time="2025-01-13T20:10:59.873764145Z" level=info msg="StopPodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:59.875537 containerd[1929]: time="2025-01-13T20:10:59.874646477Z" level=info msg="RemovePodSandbox for \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:59.875537 containerd[1929]: time="2025-01-13T20:10:59.874689807Z" level=info msg="Forcibly stopping sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\"" Jan 13 20:10:59.875537 containerd[1929]: time="2025-01-13T20:10:59.874818379Z" level=info msg="TearDown network for sandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" successfully" Jan 13 20:10:59.879378 containerd[1929]: time="2025-01-13T20:10:59.879325060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.879378 containerd[1929]: time="2025-01-13T20:10:59.879414324Z" level=info msg="RemovePodSandbox \"a58f84970aadd22f2d8f30a444c11c2a4cc6111f2793968399e4353d73132ddd\" returns successfully" Jan 13 20:10:59.880360 containerd[1929]: time="2025-01-13T20:10:59.879952097Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:59.880360 containerd[1929]: time="2025-01-13T20:10:59.880110516Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:59.880360 containerd[1929]: time="2025-01-13T20:10:59.880131515Z" level=info msg="StopPodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:59.880740 containerd[1929]: time="2025-01-13T20:10:59.880707791Z" level=info msg="RemovePodSandbox for \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:59.880862 containerd[1929]: time="2025-01-13T20:10:59.880835715Z" level=info msg="Forcibly stopping sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\"" Jan 13 20:10:59.881072 containerd[1929]: time="2025-01-13T20:10:59.881044235Z" level=info msg="TearDown network for sandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" successfully" Jan 13 20:10:59.886252 containerd[1929]: time="2025-01-13T20:10:59.886181278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.886440 containerd[1929]: time="2025-01-13T20:10:59.886409560Z" level=info msg="RemovePodSandbox \"9cb00660bf655240bfff2cfe4cee2c6cdeff07839960cd528ae0225727a7eaa3\" returns successfully" Jan 13 20:10:59.887131 containerd[1929]: time="2025-01-13T20:10:59.887071295Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:59.887308 containerd[1929]: time="2025-01-13T20:10:59.887268037Z" level=info msg="TearDown network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" successfully" Jan 13 20:10:59.887406 containerd[1929]: time="2025-01-13T20:10:59.887301198Z" level=info msg="StopPodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" returns successfully" Jan 13 20:10:59.888030 containerd[1929]: time="2025-01-13T20:10:59.887976487Z" level=info msg="RemovePodSandbox for \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:59.888719 containerd[1929]: time="2025-01-13T20:10:59.888203857Z" level=info msg="Forcibly stopping sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\"" Jan 13 20:10:59.888719 containerd[1929]: time="2025-01-13T20:10:59.888365506Z" level=info msg="TearDown network for sandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" successfully" Jan 13 20:10:59.891666 containerd[1929]: time="2025-01-13T20:10:59.891605325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.891911 containerd[1929]: time="2025-01-13T20:10:59.891680794Z" level=info msg="RemovePodSandbox \"718d3125318895e4627326a936512de6c51be51afda0747c67fdd30367f9363d\" returns successfully" Jan 13 20:10:59.892965 containerd[1929]: time="2025-01-13T20:10:59.892696094Z" level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" Jan 13 20:10:59.892965 containerd[1929]: time="2025-01-13T20:10:59.892853985Z" level=info msg="TearDown network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" successfully" Jan 13 20:10:59.892965 containerd[1929]: time="2025-01-13T20:10:59.892874959Z" level=info msg="StopPodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" returns successfully" Jan 13 20:10:59.894173 containerd[1929]: time="2025-01-13T20:10:59.893899215Z" level=info msg="RemovePodSandbox for \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" Jan 13 20:10:59.894173 containerd[1929]: time="2025-01-13T20:10:59.893968802Z" level=info msg="Forcibly stopping sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\"" Jan 13 20:10:59.894173 containerd[1929]: time="2025-01-13T20:10:59.894092644Z" level=info msg="TearDown network for sandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" successfully" Jan 13 20:10:59.898005 containerd[1929]: time="2025-01-13T20:10:59.897795799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:10:59.898005 containerd[1929]: time="2025-01-13T20:10:59.897873417Z" level=info msg="RemovePodSandbox \"3910d31ccaffaad548a1d7c493b1a4ac5d802f9ce74de089e74fe5f27cced12c\" returns successfully" Jan 13 20:10:59.898833 containerd[1929]: time="2025-01-13T20:10:59.898781851Z" level=info msg="StopPodSandbox for \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\"" Jan 13 20:10:59.899228 containerd[1929]: time="2025-01-13T20:10:59.899074029Z" level=info msg="TearDown network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" successfully" Jan 13 20:10:59.899228 containerd[1929]: time="2025-01-13T20:10:59.899102291Z" level=info msg="StopPodSandbox for \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" returns successfully" Jan 13 20:10:59.899606 containerd[1929]: time="2025-01-13T20:10:59.899565063Z" level=info msg="RemovePodSandbox for \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\"" Jan 13 20:10:59.899681 containerd[1929]: time="2025-01-13T20:10:59.899611310Z" level=info msg="Forcibly stopping sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\"" Jan 13 20:10:59.899758 containerd[1929]: time="2025-01-13T20:10:59.899731250Z" level=info msg="TearDown network for sandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" successfully" Jan 13 20:10:59.904471 containerd[1929]: time="2025-01-13T20:10:59.904184299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:10:59.904471 containerd[1929]: time="2025-01-13T20:10:59.904286542Z" level=info msg="RemovePodSandbox \"735fc36f32f4bb5da5b6a8a886c26db0a876a36a5dbf81151f0c18e78921f630\" returns successfully" Jan 13 20:11:00.739500 kubelet[2399]: E0113 20:11:00.739441 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:01.739709 kubelet[2399]: E0113 20:11:01.739643 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:02.740049 kubelet[2399]: E0113 20:11:02.739990 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:03.740801 kubelet[2399]: E0113 20:11:03.740742 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:04.741075 kubelet[2399]: E0113 20:11:04.741017 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:05.741580 kubelet[2399]: E0113 20:11:05.741521 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:06.742657 kubelet[2399]: E0113 20:11:06.742600 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:07.743036 kubelet[2399]: E0113 20:11:07.742975 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:08.743735 kubelet[2399]: E0113 20:11:08.743668 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:09.743896 kubelet[2399]: E0113 20:11:09.743839 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:10.744250 kubelet[2399]: E0113 20:11:10.744184 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:11.745082 kubelet[2399]: E0113 20:11:11.745019 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:11.904647 kubelet[2399]: I0113 20:11:11.904585 2399 topology_manager.go:215] "Topology Admit Handler" podUID="1fa60a86-ffba-4df9-8ae7-4da52fed33b6" podNamespace="default" podName="test-pod-1" Jan 13 20:11:11.916910 systemd[1]: Created slice kubepods-besteffort-pod1fa60a86_ffba_4df9_8ae7_4da52fed33b6.slice - libcontainer container kubepods-besteffort-pod1fa60a86_ffba_4df9_8ae7_4da52fed33b6.slice. 
Jan 13 20:11:11.998780 kubelet[2399]: I0113 20:11:11.998317 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a6211721-bf7f-421e-b0ed-d0ab1026d39a\" (UniqueName: \"kubernetes.io/nfs/1fa60a86-ffba-4df9-8ae7-4da52fed33b6-pvc-a6211721-bf7f-421e-b0ed-d0ab1026d39a\") pod \"test-pod-1\" (UID: \"1fa60a86-ffba-4df9-8ae7-4da52fed33b6\") " pod="default/test-pod-1" Jan 13 20:11:11.998780 kubelet[2399]: I0113 20:11:11.998377 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcwr\" (UniqueName: \"kubernetes.io/projected/1fa60a86-ffba-4df9-8ae7-4da52fed33b6-kube-api-access-gvcwr\") pod \"test-pod-1\" (UID: \"1fa60a86-ffba-4df9-8ae7-4da52fed33b6\") " pod="default/test-pod-1" Jan 13 20:11:12.136271 kernel: FS-Cache: Loaded Jan 13 20:11:12.178762 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:11:12.178867 kernel: RPC: Registered udp transport module. Jan 13 20:11:12.178931 kernel: RPC: Registered tcp transport module. Jan 13 20:11:12.180806 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:11:12.180875 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 20:11:12.486732 kernel: NFS: Registering the id_resolver key type Jan 13 20:11:12.486989 kernel: Key type id_resolver registered Jan 13 20:11:12.487161 kernel: Key type id_legacy registered Jan 13 20:11:12.523945 nfsidmap[4484]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 20:11:12.529531 nfsidmap[4485]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 20:11:12.746361 kubelet[2399]: E0113 20:11:12.746204 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:12.823459 containerd[1929]: time="2025-01-13T20:11:12.823288269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1fa60a86-ffba-4df9-8ae7-4da52fed33b6,Namespace:default,Attempt:0,}" Jan 13 20:11:13.022813 systemd-networkd[1850]: cali5ec59c6bf6e: Link UP Jan 13 20:11:13.024861 systemd-networkd[1850]: cali5ec59c6bf6e: Gained carrier Jan 13 20:11:13.025657 (udev-worker)[4472]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.900 [INFO][4486] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.215-k8s-test--pod--1-eth0 default 1fa60a86-ffba-4df9-8ae7-4da52fed33b6 1299 0 2025-01-13 20:10:41 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.215 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.900 [INFO][4486] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.946 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" HandleID="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Workload="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.964 [INFO][4497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" HandleID="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Workload="172.31.26.215-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220830), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.215", "pod":"test-pod-1", "timestamp":"2025-01-13 20:11:12.946069898 +0000 UTC"}, Hostname:"172.31.26.215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.964 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.964 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.965 [INFO][4497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.215' Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.968 [INFO][4497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.975 [INFO][4497] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.982 [INFO][4497] ipam/ipam.go 489: Trying affinity for 192.168.61.192/26 host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.985 [INFO][4497] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.989 [INFO][4497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.989 [INFO][4497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.992 [INFO][4497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2 Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:12.998 [INFO][4497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:13.015 [INFO][4497] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.196/26] block=192.168.61.192/26 handle="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:13.015 [INFO][4497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.196/26] handle="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" host="172.31.26.215" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:13.015 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:13.015 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.196/26] IPv6=[] ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" HandleID="k8s-pod-network.1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Workload="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.046282 containerd[1929]: 2025-01-13 20:11:13.017 [INFO][4486] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1fa60a86-ffba-4df9-8ae7-4da52fed33b6", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:11:13.048456 containerd[1929]: 2025-01-13 20:11:13.018 [INFO][4486] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.196/32] ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.048456 containerd[1929]: 2025-01-13 20:11:13.018 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.048456 containerd[1929]: 2025-01-13 20:11:13.023 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.048456 containerd[1929]: 2025-01-13 20:11:13.025 [INFO][4486] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.215-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1fa60a86-ffba-4df9-8ae7-4da52fed33b6", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 10, 41, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.215", ContainerID:"1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d6:61:0f:30:26:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:11:13.048456 containerd[1929]: 2025-01-13 20:11:13.040 [INFO][4486] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.215-k8s-test--pod--1-eth0" Jan 13 20:11:13.082617 containerd[1929]: time="2025-01-13T20:11:13.082156768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:11:13.082617 containerd[1929]: time="2025-01-13T20:11:13.082283143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:11:13.082617 containerd[1929]: time="2025-01-13T20:11:13.082312414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:13.082617 containerd[1929]: time="2025-01-13T20:11:13.082460928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:13.112501 systemd[1]: Started cri-containerd-1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2.scope - libcontainer container 1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2. 
Jan 13 20:11:13.184394 containerd[1929]: time="2025-01-13T20:11:13.184302881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1fa60a86-ffba-4df9-8ae7-4da52fed33b6,Namespace:default,Attempt:0,} returns sandbox id \"1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2\"" Jan 13 20:11:13.187877 containerd[1929]: time="2025-01-13T20:11:13.187805799Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:11:13.577138 containerd[1929]: time="2025-01-13T20:11:13.576836820Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:11:13.577856 containerd[1929]: time="2025-01-13T20:11:13.577779735Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:11:13.584384 containerd[1929]: time="2025-01-13T20:11:13.584290817Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 396.4326ms" Jan 13 20:11:13.584384 containerd[1929]: time="2025-01-13T20:11:13.584375520Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:11:13.588124 containerd[1929]: time="2025-01-13T20:11:13.587753756Z" level=info msg="CreateContainer within sandbox \"1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:11:13.612647 containerd[1929]: time="2025-01-13T20:11:13.612511065Z" level=info msg="CreateContainer within sandbox \"1d925fa26d94bff3448210bc60b2c4306f008b94b28cb3487d285eed591d87d2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a\"" Jan 13 20:11:13.613883 containerd[1929]: time="2025-01-13T20:11:13.613363467Z" level=info msg="StartContainer for \"563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a\"" Jan 13 20:11:13.669514 systemd[1]: Started cri-containerd-563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a.scope - libcontainer container 563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a. Jan 13 20:11:13.718089 containerd[1929]: time="2025-01-13T20:11:13.717695919Z" level=info msg="StartContainer for \"563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a\" returns successfully" Jan 13 20:11:13.747075 kubelet[2399]: E0113 20:11:13.746977 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:14.116823 systemd[1]: run-containerd-runc-k8s.io-563245a329d5c86bd01de2507025a91ee363de0751ea057bf6e539a42fb3275a-runc.yYmCkA.mount: Deactivated successfully. 
Jan 13 20:11:14.704959 systemd-networkd[1850]: cali5ec59c6bf6e: Gained IPv6LL Jan 13 20:11:14.747686 kubelet[2399]: E0113 20:11:14.747609 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:15.748475 kubelet[2399]: E0113 20:11:15.748347 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:16.748942 kubelet[2399]: E0113 20:11:16.748868 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:17.128279 ntpd[1916]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 20:11:17.128990 ntpd[1916]: 13 Jan 20:11:17 ntpd[1916]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 20:11:17.749920 kubelet[2399]: E0113 20:11:17.749861 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:18.750302 kubelet[2399]: E0113 20:11:18.750237 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:19.700491 kubelet[2399]: E0113 20:11:19.700197 2399 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:19.750724 kubelet[2399]: E0113 20:11:19.750654 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:20.750959 kubelet[2399]: E0113 20:11:20.750893 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:21.751482 kubelet[2399]: E0113 20:11:21.751417 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:22.752034 kubelet[2399]: E0113 20:11:22.751961 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:23.753095 kubelet[2399]: E0113 20:11:23.753038 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:24.754906 kubelet[2399]: E0113 20:11:24.754841 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:25.755201 kubelet[2399]: E0113 20:11:25.755127 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:26.755929 kubelet[2399]: E0113 20:11:26.755870 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:27.756338 kubelet[2399]: E0113 20:11:27.756274 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:28.756823 kubelet[2399]: E0113 20:11:28.756750 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:29.757364 kubelet[2399]: E0113 20:11:29.757313 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:30.757968 kubelet[2399]: E0113 20:11:30.757911 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:11:31.758478 kubelet[2399]: E0113 20:11:31.758408 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:32.758631 kubelet[2399]: E0113 20:11:32.758573 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:33.758823 kubelet[2399]: E0113 20:11:33.758751 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:34.759547 kubelet[2399]: E0113 20:11:34.759489 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:35.760636 kubelet[2399]: E0113 20:11:35.760551 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:36.760899 kubelet[2399]: E0113 20:11:36.760841 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:37.761691 kubelet[2399]: E0113 20:11:37.761622 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:38.762383 kubelet[2399]: E0113 20:11:38.762320 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:39.700856 kubelet[2399]: E0113 20:11:39.700791 2399 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:39.763448 kubelet[2399]: E0113 20:11:39.763383 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:40.763697 kubelet[2399]: E0113 20:11:40.763637 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:41.764400 kubelet[2399]: E0113 20:11:41.764335 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:41.942199 kubelet[2399]: E0113 20:11:41.942118 2399 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.215?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 13 20:11:42.507537 kubelet[2399]: E0113 20:11:42.507256 2399 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.26.215\": Get \"https://172.31.25.252:6443/api/v1/nodes/172.31.26.215?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 13 20:11:42.764637 kubelet[2399]: E0113 20:11:42.764484 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:43.765142 kubelet[2399]: E0113 20:11:43.765080 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:44.766075 kubelet[2399]: E0113 20:11:44.766016 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:45.767249 kubelet[2399]: E0113 20:11:45.767163 2399 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:46.768058 kubelet[2399]: E0113 20:11:46.767989 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:47.768940 kubelet[2399]: E0113 20:11:47.768879 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:48.769091 kubelet[2399]: E0113 20:11:48.769038 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:11:49.770224 kubelet[2399]: E0113 20:11:49.770165 2399 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"