Feb 13 19:02:39.163862 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:02:39.163907 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:02:39.163931 kernel: KASLR disabled due to lack of seed
Feb 13 19:02:39.163947 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:02:39.163963 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:02:39.163979 kernel: secureboot: Secure boot disabled
Feb 13 19:02:39.163996 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:02:39.164011 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:02:39.164026 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:02:39.164042 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:02:39.164061 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:02:39.164077 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:02:39.164093 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:02:39.164109 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:02:39.164127 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:02:39.164148 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:02:39.164165 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:02:39.164182 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:02:39.164198 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:02:39.164214 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:02:39.166301 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:02:39.166321 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:02:39.166338 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:02:39.166356 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:02:39.166394 kernel: Zone ranges:
Feb 13 19:02:39.166412 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:02:39.166438 kernel: DMA32 empty
Feb 13 19:02:39.166455 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:02:39.166470 kernel: Movable zone start for each node
Feb 13 19:02:39.166486 kernel: Early memory node ranges
Feb 13 19:02:39.166502 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:02:39.166518 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:02:39.166534 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:02:39.166549 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:02:39.166565 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:02:39.166581 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:02:39.166596 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:02:39.166612 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:02:39.166632 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:02:39.166649 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:02:39.166672 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:02:39.166689 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:02:39.166706 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:02:39.166726 kernel: psci: Trusted OS migration not required
Feb 13 19:02:39.166744 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:02:39.166760 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:02:39.166777 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:02:39.166795 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:02:39.166811 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:02:39.166828 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:02:39.166845 kernel: CPU features: detected: Spectre-v2
Feb 13 19:02:39.166862 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:02:39.166879 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:02:39.166895 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:02:39.166912 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:02:39.166933 kernel: alternatives: applying boot alternatives
Feb 13 19:02:39.166952 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:02:39.166971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:02:39.166988 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:02:39.167005 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:02:39.167022 kernel: Fallback order for Node 0: 0
Feb 13 19:02:39.167038 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:02:39.167055 kernel: Policy zone: Normal
Feb 13 19:02:39.167072 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:02:39.167088 kernel: software IO TLB: area num 2.
Feb 13 19:02:39.167110 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:02:39.167128 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:02:39.167145 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:02:39.167162 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:02:39.167180 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:02:39.167198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:02:39.167215 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:02:39.167286 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:02:39.167309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:02:39.167326 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:02:39.167343 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:02:39.167366 kernel: GICv3: 96 SPIs implemented
Feb 13 19:02:39.167383 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:02:39.167400 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:02:39.167417 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:02:39.167433 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:02:39.167450 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:02:39.167467 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:02:39.167484 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:02:39.167502 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:02:39.167518 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:02:39.167536 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:02:39.167552 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:02:39.167574 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:02:39.167591 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:02:39.167608 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:02:39.167625 kernel: Console: colour dummy device 80x25
Feb 13 19:02:39.167644 kernel: printk: console [tty1] enabled
Feb 13 19:02:39.167661 kernel: ACPI: Core revision 20230628
Feb 13 19:02:39.167679 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:02:39.167697 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:02:39.167714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:02:39.167732 kernel: landlock: Up and running.
Feb 13 19:02:39.167754 kernel: SELinux: Initializing.
Feb 13 19:02:39.167771 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:39.167789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:39.167807 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:02:39.167824 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:02:39.167842 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:02:39.167860 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:02:39.167878 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:02:39.167901 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:02:39.167920 kernel: Remapping and enabling EFI services.
Feb 13 19:02:39.167938 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:02:39.167956 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:02:39.167974 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:02:39.167992 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:02:39.168010 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:02:39.168028 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:02:39.168047 kernel: SMP: Total of 2 processors activated.
Feb 13 19:02:39.168064 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:02:39.168088 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:02:39.168107 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:02:39.168138 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:02:39.168162 kernel: alternatives: applying system-wide alternatives
Feb 13 19:02:39.168180 kernel: devtmpfs: initialized
Feb 13 19:02:39.168199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:02:39.169319 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:02:39.169378 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:02:39.169399 kernel: SMBIOS 3.0.0 present.
Feb 13 19:02:39.169429 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:02:39.169449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:02:39.169470 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:02:39.169490 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:02:39.169509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:02:39.169528 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:02:39.169547 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Feb 13 19:02:39.169579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:02:39.169599 kernel: cpuidle: using governor menu
Feb 13 19:02:39.169618 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:02:39.169637 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:02:39.169656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:02:39.169674 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:02:39.169693 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:02:39.169711 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:02:39.169730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:02:39.169752 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:02:39.169771 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:02:39.169790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:02:39.169810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:02:39.169829 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:02:39.169847 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:02:39.169866 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:02:39.169884 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:02:39.169904 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:02:39.169928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:02:39.169947 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:02:39.169966 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:02:39.169985 kernel: ACPI: Interpreter enabled
Feb 13 19:02:39.170022 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:02:39.170042 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:02:39.170061 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:02:39.170449 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:02:39.170680 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:02:39.170880 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:02:39.171078 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:02:39.172485 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:02:39.172530 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:02:39.172550 kernel: acpiphp: Slot [1] registered
Feb 13 19:02:39.172569 kernel: acpiphp: Slot [2] registered
Feb 13 19:02:39.172588 kernel: acpiphp: Slot [3] registered
Feb 13 19:02:39.172616 kernel: acpiphp: Slot [4] registered
Feb 13 19:02:39.172635 kernel: acpiphp: Slot [5] registered
Feb 13 19:02:39.172653 kernel: acpiphp: Slot [6] registered
Feb 13 19:02:39.172670 kernel: acpiphp: Slot [7] registered
Feb 13 19:02:39.172688 kernel: acpiphp: Slot [8] registered
Feb 13 19:02:39.172706 kernel: acpiphp: Slot [9] registered
Feb 13 19:02:39.172724 kernel: acpiphp: Slot [10] registered
Feb 13 19:02:39.172742 kernel: acpiphp: Slot [11] registered
Feb 13 19:02:39.172760 kernel: acpiphp: Slot [12] registered
Feb 13 19:02:39.172778 kernel: acpiphp: Slot [13] registered
Feb 13 19:02:39.172800 kernel: acpiphp: Slot [14] registered
Feb 13 19:02:39.172818 kernel: acpiphp: Slot [15] registered
Feb 13 19:02:39.172836 kernel: acpiphp: Slot [16] registered
Feb 13 19:02:39.172854 kernel: acpiphp: Slot [17] registered
Feb 13 19:02:39.172872 kernel: acpiphp: Slot [18] registered
Feb 13 19:02:39.172890 kernel: acpiphp: Slot [19] registered
Feb 13 19:02:39.172908 kernel: acpiphp: Slot [20] registered
Feb 13 19:02:39.172925 kernel: acpiphp: Slot [21] registered
Feb 13 19:02:39.172943 kernel: acpiphp: Slot [22] registered
Feb 13 19:02:39.172965 kernel: acpiphp: Slot [23] registered
Feb 13 19:02:39.172983 kernel: acpiphp: Slot [24] registered
Feb 13 19:02:39.173001 kernel: acpiphp: Slot [25] registered
Feb 13 19:02:39.173019 kernel: acpiphp: Slot [26] registered
Feb 13 19:02:39.173037 kernel: acpiphp: Slot [27] registered
Feb 13 19:02:39.173054 kernel: acpiphp: Slot [28] registered
Feb 13 19:02:39.173072 kernel: acpiphp: Slot [29] registered
Feb 13 19:02:39.173090 kernel: acpiphp: Slot [30] registered
Feb 13 19:02:39.173108 kernel: acpiphp: Slot [31] registered
Feb 13 19:02:39.173126 kernel: PCI host bridge to bus 0000:00
Feb 13 19:02:39.173390 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:02:39.173574 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:02:39.173753 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:02:39.173932 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:02:39.174157 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:02:39.175588 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:02:39.175850 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:02:39.176090 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:02:39.176377 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:02:39.176602 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:02:39.176854 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:02:39.177069 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:02:39.194863 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:02:39.195132 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:02:39.195369 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:02:39.195577 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:02:39.195783 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:02:39.195990 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:02:39.196194 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:02:39.196427 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:02:39.196627 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:02:39.196811 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:02:39.197010 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:02:39.197037 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:02:39.197056 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:02:39.197075 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:02:39.197093 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:02:39.197111 kernel: iommu: Default domain type: Translated
Feb 13 19:02:39.197136 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:02:39.197154 kernel: efivars: Registered efivars operations
Feb 13 19:02:39.197172 kernel: vgaarb: loaded
Feb 13 19:02:39.197190 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:02:39.197208 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:02:39.199341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:02:39.199374 kernel: pnp: PnP ACPI init
Feb 13 19:02:39.199633 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:02:39.199669 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:02:39.199688 kernel: NET: Registered PF_INET protocol family
Feb 13 19:02:39.199707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:02:39.199725 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:02:39.199744 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:02:39.199762 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:02:39.199781 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:02:39.199799 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:02:39.199817 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:39.199840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:39.199859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:02:39.199877 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:02:39.199895 kernel: kvm [1]: HYP mode not available
Feb 13 19:02:39.199912 kernel: Initialise system trusted keyrings
Feb 13 19:02:39.199931 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:02:39.199949 kernel: Key type asymmetric registered
Feb 13 19:02:39.199967 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:02:39.199985 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:02:39.200007 kernel: io scheduler mq-deadline registered
Feb 13 19:02:39.200026 kernel: io scheduler kyber registered
Feb 13 19:02:39.200044 kernel: io scheduler bfq registered
Feb 13 19:02:39.200321 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:02:39.200350 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:02:39.200369 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:02:39.200387 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:02:39.200405 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:02:39.200430 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:02:39.200449 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:02:39.200659 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:02:39.200685 kernel: printk: console [ttyS0] disabled
Feb 13 19:02:39.200703 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:02:39.200721 kernel: printk: console [ttyS0] enabled
Feb 13 19:02:39.200739 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:02:39.200757 kernel: thunder_xcv, ver 1.0
Feb 13 19:02:39.200775 kernel: thunder_bgx, ver 1.0
Feb 13 19:02:39.200793 kernel: nicpf, ver 1.0
Feb 13 19:02:39.200816 kernel: nicvf, ver 1.0
Feb 13 19:02:39.201018 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:02:39.201213 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:02:38 UTC (1739473358)
Feb 13 19:02:39.201399 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:02:39.201418 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:02:39.201437 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:02:39.201455 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:02:39.201479 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:02:39.201498 kernel: Segment Routing with IPv6
Feb 13 19:02:39.201516 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:02:39.201534 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:02:39.201552 kernel: Key type dns_resolver registered
Feb 13 19:02:39.201570 kernel: registered taskstats version 1
Feb 13 19:02:39.201588 kernel: Loading compiled-in X.509 certificates
Feb 13 19:02:39.201606 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:02:39.201624 kernel: Key type .fscrypt registered
Feb 13 19:02:39.201642 kernel: Key type fscrypt-provisioning registered
Feb 13 19:02:39.201664 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:02:39.201683 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:02:39.201701 kernel: ima: No architecture policies found
Feb 13 19:02:39.201719 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:02:39.201737 kernel: clk: Disabling unused clocks
Feb 13 19:02:39.201755 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:02:39.201773 kernel: Run /init as init process
Feb 13 19:02:39.201791 kernel: with arguments:
Feb 13 19:02:39.201809 kernel: /init
Feb 13 19:02:39.201831 kernel: with environment:
Feb 13 19:02:39.201848 kernel: HOME=/
Feb 13 19:02:39.201867 kernel: TERM=linux
Feb 13 19:02:39.201884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:02:39.201907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:02:39.201930 systemd[1]: Detected virtualization amazon.
Feb 13 19:02:39.201951 systemd[1]: Detected architecture arm64.
Feb 13 19:02:39.201974 systemd[1]: Running in initrd.
Feb 13 19:02:39.202006 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:02:39.202030 systemd[1]: Hostname set to .
Feb 13 19:02:39.202051 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:39.202071 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:02:39.202091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:39.202110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:39.202132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:02:39.202158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:39.202179 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:02:39.202200 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:02:39.202245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:02:39.202270 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:02:39.202291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:39.202310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:39.202337 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:39.202357 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:39.202396 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:39.202416 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:39.202436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:39.202456 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:39.202476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:02:39.202496 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:02:39.202516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:39.202541 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:39.202561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:39.202581 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:39.202601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:02:39.202620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:39.202640 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:02:39.202660 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:02:39.202680 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:39.202704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:39.202725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:39.202745 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:39.202765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:39.202784 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:02:39.202841 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 19:02:39.202889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:02:39.202909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:02:39.202929 systemd-journald[252]: Journal started
Feb 13 19:02:39.202971 systemd-journald[252]: Runtime Journal (/run/log/journal/ec25b43826db3750a21fde5b82e58b40) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:02:39.171291 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 19:02:39.211841 kernel: Bridge firewalling registered
Feb 13 19:02:39.211879 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:39.204434 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 19:02:39.217931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:39.220703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:39.224648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:02:39.244770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:39.250499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:39.251653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:39.283620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:39.304095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:39.314717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:39.326174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:39.330957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:39.344685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:02:39.369020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:39.392524 dracut-cmdline[287]: dracut-dracut-053
Feb 13 19:02:39.400257 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:02:39.447336 systemd-resolved[290]: Positive Trust Anchors:
Feb 13 19:02:39.447390 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:39.447467 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:39.559261 kernel: SCSI subsystem initialized
Feb 13 19:02:39.568243 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:02:39.579263 kernel: iscsi: registered transport (tcp)
Feb 13 19:02:39.601256 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:02:39.601329 kernel: QLogic iSCSI HBA Driver
Feb 13 19:02:39.659255 kernel: random: crng init done
Feb 13 19:02:39.659562 systemd-resolved[290]: Defaulting to hostname 'linux'.
Feb 13 19:02:39.662940 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:39.665427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:39.690430 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:39.699576 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:02:39.741577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:02:39.741664 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:02:39.743294 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:02:39.824252 kernel: raid6: neonx8 gen() 6719 MB/s
Feb 13 19:02:39.826262 kernel: raid6: neonx4 gen() 6520 MB/s
Feb 13 19:02:39.842252 kernel: raid6: neonx2 gen() 5450 MB/s
Feb 13 19:02:39.859252 kernel: raid6: neonx1 gen() 3936 MB/s
Feb 13 19:02:39.876252 kernel: raid6: int64x8 gen() 3801 MB/s
Feb 13 19:02:39.893251 kernel: raid6: int64x4 gen() 3717 MB/s
Feb 13 19:02:39.910252 kernel: raid6: int64x2 gen() 3597 MB/s
Feb 13 19:02:39.927999 kernel: raid6: int64x1 gen() 2768 MB/s
Feb 13 19:02:39.928030 kernel: raid6: using algorithm neonx8 gen() 6719 MB/s
Feb 13 19:02:39.945981 kernel: raid6: .... xor() 4913 MB/s, rmw enabled
Feb 13 19:02:39.946016 kernel: raid6: using neon recovery algorithm
Feb 13 19:02:39.953256 kernel: xor: measuring software checksum speed
Feb 13 19:02:39.955393 kernel: 8regs : 10248 MB/sec
Feb 13 19:02:39.955429 kernel: 32regs : 11438 MB/sec
Feb 13 19:02:39.956559 kernel: arm64_neon : 9517 MB/sec
Feb 13 19:02:39.956599 kernel: xor: using function: 32regs (11438 MB/sec)
Feb 13 19:02:40.040276 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:02:40.058921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:40.067484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:40.109258 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 19:02:40.117942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:40.135214 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:02:40.177961 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Feb 13 19:02:40.233328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:40.241527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:40.364623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:40.376761 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:02:40.418018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:40.422603 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:40.424872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:40.427081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:40.445918 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:02:40.471330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:40.563320 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:02:40.563404 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:02:40.587530 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:02:40.587790 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:02:40.588020 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4f:e3:35:87:79
Feb 13 19:02:40.574181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:40.574575 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:40.577704 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:40.579885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:40.580262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:40.612344 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:02:40.612383 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:02:40.582576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:40.596674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:40.597788 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:02:40.630500 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:02:40.639345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:02:40.639414 kernel: GPT:9289727 != 16777215
Feb 13 19:02:40.640516 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:02:40.642205 kernel: GPT:9289727 != 16777215
Feb 13 19:02:40.642251 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:02:40.643266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:40.647495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:40.659498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:40.695988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:40.761625 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Feb 13 19:02:40.780294 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Feb 13 19:02:40.831481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:02:40.854124 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:02:40.896011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:02:40.909807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:02:40.912546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:02:40.933575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:02:40.945828 disk-uuid[662]: Primary Header is updated.
Feb 13 19:02:40.945828 disk-uuid[662]: Secondary Entries is updated.
Feb 13 19:02:40.945828 disk-uuid[662]: Secondary Header is updated.
Feb 13 19:02:40.958268 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:40.966270 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:41.970373 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:41.972253 disk-uuid[664]: The operation has completed successfully.
Feb 13 19:02:42.150891 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:02:42.151113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:02:42.205590 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:02:42.213295 sh[922]: Success
Feb 13 19:02:42.231448 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:02:42.327910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:02:42.339462 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:02:42.351882 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:02:42.383406 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:02:42.383480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:42.383506 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:02:42.385037 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:02:42.386266 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:02:42.516265 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:02:42.549377 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:02:42.553211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:02:42.564472 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:02:42.571522 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:02:42.600236 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:02:42.600325 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:42.600357 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:02:42.608262 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:02:42.624342 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:02:42.627188 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:02:42.637445 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:02:42.647838 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:02:42.751653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:42.762609 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:42.814797 systemd-networkd[1114]: lo: Link UP
Feb 13 19:02:42.814820 systemd-networkd[1114]: lo: Gained carrier
Feb 13 19:02:42.818725 systemd-networkd[1114]: Enumeration completed
Feb 13 19:02:42.819652 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:42.819659 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:42.820767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:42.827985 systemd-networkd[1114]: eth0: Link UP
Feb 13 19:02:42.827996 systemd-networkd[1114]: eth0: Gained carrier
Feb 13 19:02:42.828015 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:42.842442 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:42.865293 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.23.196/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:02:42.994793 ignition[1026]: Ignition 2.20.0
Feb 13 19:02:42.994815 ignition[1026]: Stage: fetch-offline
Feb 13 19:02:42.995749 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:42.995777 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:42.996361 ignition[1026]: Ignition finished successfully
Feb 13 19:02:43.006039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:43.020466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:02:43.043040 ignition[1124]: Ignition 2.20.0
Feb 13 19:02:43.043569 ignition[1124]: Stage: fetch
Feb 13 19:02:43.044157 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:43.044189 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:43.044412 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:43.085378 ignition[1124]: PUT result: OK
Feb 13 19:02:43.092285 ignition[1124]: parsed url from cmdline: ""
Feb 13 19:02:43.092302 ignition[1124]: no config URL provided
Feb 13 19:02:43.092890 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:02:43.092918 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:02:43.093133 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:43.095968 ignition[1124]: PUT result: OK
Feb 13 19:02:43.096118 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:02:43.103173 ignition[1124]: GET result: OK
Feb 13 19:02:43.103301 ignition[1124]: parsing config with SHA512: 4d09674a2ed6d359ea4a53b9bc8201bb0400c104c568f649e5ae3b1c45b73074be61e240adaeda965777c17a56f4be7270f844c1872f74226e52df8319dcc25a
Feb 13 19:02:43.109670 unknown[1124]: fetched base config from "system"
Feb 13 19:02:43.109693 unknown[1124]: fetched base config from "system"
Feb 13 19:02:43.110651 ignition[1124]: fetch: fetch complete
Feb 13 19:02:43.109707 unknown[1124]: fetched user config from "aws"
Feb 13 19:02:43.110664 ignition[1124]: fetch: fetch passed
Feb 13 19:02:43.110755 ignition[1124]: Ignition finished successfully
Feb 13 19:02:43.121202 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:02:43.139685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:02:43.163264 ignition[1130]: Ignition 2.20.0
Feb 13 19:02:43.163756 ignition[1130]: Stage: kargs
Feb 13 19:02:43.164373 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:43.164398 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:43.164600 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:43.169325 ignition[1130]: PUT result: OK
Feb 13 19:02:43.183901 ignition[1130]: kargs: kargs passed
Feb 13 19:02:43.183997 ignition[1130]: Ignition finished successfully
Feb 13 19:02:43.189296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:43.205137 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:02:43.226748 ignition[1137]: Ignition 2.20.0
Feb 13 19:02:43.227295 ignition[1137]: Stage: disks
Feb 13 19:02:43.227880 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:43.227905 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:43.228100 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:43.232384 ignition[1137]: PUT result: OK
Feb 13 19:02:43.240598 ignition[1137]: disks: disks passed
Feb 13 19:02:43.240914 ignition[1137]: Ignition finished successfully
Feb 13 19:02:43.245865 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:02:43.249759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:43.252135 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:02:43.256252 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:43.262077 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:43.263962 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:43.275525 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:02:43.323352 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:02:43.328119 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:02:43.343642 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:02:43.425262 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:02:43.425789 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:02:43.429357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:43.445429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:43.457686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:02:43.463001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:02:43.463086 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:02:43.463136 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:43.477710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:02:43.497258 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1164)
Feb 13 19:02:43.499899 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:02:43.509409 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:02:43.509451 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:43.509477 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:02:43.520239 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:02:43.522915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:43.921365 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:02:43.930096 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:02:43.950186 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:02:43.958396 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:02:44.270360 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:44.285901 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:02:44.290712 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:44.310603 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:02:44.310393 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:02:44.356707 ignition[1276]: INFO : Ignition 2.20.0
Feb 13 19:02:44.360923 ignition[1276]: INFO : Stage: mount
Feb 13 19:02:44.360923 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:44.360923 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:44.360923 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:44.356792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:44.387419 ignition[1276]: INFO : PUT result: OK
Feb 13 19:02:44.387419 ignition[1276]: INFO : mount: mount passed
Feb 13 19:02:44.387419 ignition[1276]: INFO : Ignition finished successfully
Feb 13 19:02:44.388174 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:02:44.401391 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:02:44.435558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:44.458275 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Feb 13 19:02:44.462003 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:02:44.462052 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:44.462089 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:02:44.468261 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:02:44.471492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:44.513551 ignition[1306]: INFO : Ignition 2.20.0
Feb 13 19:02:44.513551 ignition[1306]: INFO : Stage: files
Feb 13 19:02:44.517733 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:44.517733 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:44.517733 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:44.517733 ignition[1306]: INFO : PUT result: OK
Feb 13 19:02:44.527654 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:02:44.531237 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:02:44.531237 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:02:44.562466 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:02:44.565252 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:02:44.568006 unknown[1306]: wrote ssh authorized keys file for user: core
Feb 13 19:02:44.570305 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:02:44.574085 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:02:44.577306 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:02:44.603532 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:02:44.744380 systemd-networkd[1114]: eth0: Gained IPv6LL
Feb 13 19:02:44.954984 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:02:45.345809 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:02:45.349684 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:45.349684 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:45.349684 ignition[1306]: INFO : files: files passed
Feb 13 19:02:45.349684 ignition[1306]: INFO : Ignition finished successfully
Feb 13 19:02:45.357110 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:02:45.382638 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:02:45.391570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:02:45.396990 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:02:45.398274 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:02:45.433704 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:45.433704 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:45.440240 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:45.446276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:45.451853 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:02:45.461478 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:02:45.532988 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:02:45.533188 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:02:45.537830 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:45.540377 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:02:45.542582 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:02:45.558629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:02:45.585158 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:45.609601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:02:45.637432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:02:45.637868 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:02:45.645093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:45.647328 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:45.649590 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:02:45.651680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:02:45.651778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:45.658632 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:02:45.667692 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:02:45.669320 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:02:45.671274 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:45.673344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:45.675390 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:02:45.677275 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:45.681671 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:02:45.694421 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:02:45.696310 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:02:45.697865 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:02:45.697958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:45.700705 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:45.704420 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:45.706620 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:02:45.713742 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:45.722346 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:02:45.722442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:45.724626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:02:45.724707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:45.727003 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:02:45.727080 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:02:45.745488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:02:45.751255 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:02:45.751360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:45.757710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:45.768431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:02:45.768566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:45.770823 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:02:45.770926 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:45.791630 ignition[1360]: INFO : Ignition 2.20.0
Feb 13 19:02:45.791630 ignition[1360]: INFO : Stage: umount
Feb 13 19:02:45.795423 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:45.795423 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:45.795423 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:45.804206 ignition[1360]: INFO : PUT result: OK
Feb 13 19:02:45.807567 ignition[1360]: INFO : umount: umount passed
Feb 13 19:02:45.809398 ignition[1360]: INFO : Ignition finished successfully
Feb 13 19:02:45.811150 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:02:45.815826 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:02:45.817641 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:02:45.821720 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:02:45.821819 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:02:45.826958 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:02:45.827045 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:45.836548 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:02:45.836631 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:02:45.838715 systemd[1]: Stopped target network.target - Network.
Feb 13 19:02:45.841371 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:02:45.841475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:45.843918 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:02:45.846357 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:02:45.850596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:45.853706 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:02:45.856643 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:02:45.856887 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:02:45.856964 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:45.857177 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:02:45.857700 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:45.858056 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:02:45.858139 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:02:45.858668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:02:45.858744 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:45.859164 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:02:45.859953 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:45.904069 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:02:45.906434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:45.908911 systemd-networkd[1114]: eth0: DHCPv6 lease lost
Feb 13 19:02:45.917594 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:02:45.917824 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:02:45.927851 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:02:45.927952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:45.944488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:02:45.947088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:02:45.947199 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:45.955133 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:02:45.955972 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:45.961265 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:02:45.961367 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:45.963396 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:02:45.963482 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:45.968170 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:45.989938 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:02:45.990409 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:45.999520 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:02:45.999672 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:46.016929 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:02:46.017308 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:46.020861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:02:46.020950 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:46.026737 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:02:46.026817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:46.029549 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:02:46.029634 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:46.032190 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:02:46.032400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:46.036793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:46.036891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:46.060493 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:02:46.063610 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:02:46.063722 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:46.066408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:46.066486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:46.069560 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:02:46.069737 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:02:46.108382 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:02:46.108749 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:02:46.116390 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:02:46.124540 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:02:46.150640 systemd[1]: Switching root.
Feb 13 19:02:46.215072 systemd-journald[252]: Journal stopped
Feb 13 19:02:48.570734 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:02:48.570867 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:02:48.570910 kernel: SELinux: policy capability open_perms=1
Feb 13 19:02:48.570946 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:02:48.570976 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:02:48.571006 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:02:48.571036 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:02:48.571066 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:02:48.571103 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:02:48.571131 kernel: audit: type=1403 audit(1739473366.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:02:48.571169 systemd[1]: Successfully loaded SELinux policy in 86.321ms.
Feb 13 19:02:48.571214 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.982ms.
Feb 13 19:02:48.573985 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:02:48.574024 systemd[1]: Detected virtualization amazon.
Feb 13 19:02:48.574054 systemd[1]: Detected architecture arm64.
Feb 13 19:02:48.574085 systemd[1]: Detected first boot.
Feb 13 19:02:48.574117 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:48.574149 zram_generator::config[1401]: No configuration found.
Feb 13 19:02:48.574182 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:02:48.574213 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:02:48.576400 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:02:48.576441 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:02:48.576474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:02:48.576507 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:02:48.576540 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:02:48.576574 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:02:48.576606 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:02:48.576637 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:02:48.576671 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:02:48.576701 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:02:48.576734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:48.576765 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:48.576794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:02:48.576824 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:02:48.576869 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:02:48.576901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:48.576933 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:02:48.576964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:48.576993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:02:48.577024 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:48.577056 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:48.577087 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:02:48.577121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:48.577151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:48.577179 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:48.577209 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:48.588515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:02:48.588570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:02:48.588601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:48.588631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:48.588662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:48.588698 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:02:48.588734 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:02:48.588767 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:02:48.588795 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:02:48.588826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:02:48.588856 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:02:48.588885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:02:48.588915 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:02:48.588944 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:02:48.588978 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:02:48.589009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:02:48.589038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:48.589069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:02:48.589097 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:02:48.589129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:02:48.589158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:02:48.589192 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:02:48.590293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:02:48.590364 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:02:48.590397 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:02:48.590427 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:02:48.590463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:02:48.590492 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:02:48.590525 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:48.590555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:48.590583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:02:48.590620 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:02:48.590649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:48.590679 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:02:48.590707 systemd[1]: Stopped verity-setup.service.
Feb 13 19:02:48.590736 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:02:48.590764 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:02:48.590793 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:02:48.590821 kernel: fuse: init (API version 7.39)
Feb 13 19:02:48.590850 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:02:48.590884 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:02:48.590913 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:02:48.590941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:48.590971 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:02:48.590999 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:02:48.591034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:02:48.591063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:02:48.591092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:02:48.591167 systemd-journald[1486]: Collecting audit messages is disabled.
Feb 13 19:02:48.597320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:02:48.597389 kernel: loop: module loaded
Feb 13 19:02:48.597420 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:02:48.597460 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:02:48.597490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:48.597521 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:02:48.597551 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:02:48.597580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:02:48.597619 systemd-journald[1486]: Journal started
Feb 13 19:02:48.597670 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec25b43826db3750a21fde5b82e58b40) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:02:47.998077 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:02:48.605387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:02:48.052752 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:02:48.053523 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:02:48.615048 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:48.615122 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:02:48.638385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:02:48.644653 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:02:48.644724 kernel: ACPI: bus type drm_connector registered
Feb 13 19:02:48.648512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:02:48.666772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:02:48.666853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:02:48.683334 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:02:48.704171 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:48.713254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:02:48.721191 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:48.726293 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:02:48.729203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:02:48.729552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:02:48.735565 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:02:48.735960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:02:48.739460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:02:48.742471 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:02:48.745010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:02:48.749486 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:02:48.765758 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:02:48.799261 kernel: loop0: detected capacity change from 0 to 116808
Feb 13 19:02:48.811990 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:02:48.814413 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:02:48.832556 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:02:48.847653 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:02:48.849963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:02:48.854746 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:02:48.859347 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:48.868153 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:02:48.895338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:48.908587 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec25b43826db3750a21fde5b82e58b40 is 54.907ms for 897 entries.
Feb 13 19:02:48.908587 systemd-journald[1486]: System Journal (/var/log/journal/ec25b43826db3750a21fde5b82e58b40) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:02:48.979804 systemd-journald[1486]: Received client request to flush runtime journal.
Feb 13 19:02:48.979876 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:02:48.918393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:02:48.922215 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:02:48.952588 udevadm[1540]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:02:48.986306 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:02:48.999655 kernel: loop1: detected capacity change from 0 to 53784
Feb 13 19:02:49.018358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:02:49.028696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:49.110024 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Feb 13 19:02:49.110639 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Feb 13 19:02:49.127393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:49.135267 kernel: loop2: detected capacity change from 0 to 113536
Feb 13 19:02:49.239271 kernel: loop3: detected capacity change from 0 to 189592
Feb 13 19:02:49.359667 kernel: loop4: detected capacity change from 0 to 116808
Feb 13 19:02:49.384277 kernel: loop5: detected capacity change from 0 to 53784
Feb 13 19:02:49.399260 kernel: loop6: detected capacity change from 0 to 113536
Feb 13 19:02:49.415269 kernel: loop7: detected capacity change from 0 to 189592
Feb 13 19:02:49.445265 (sd-merge)[1555]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:02:49.446212 (sd-merge)[1555]: Merged extensions into '/usr'.
Feb 13 19:02:49.457847 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:02:49.458205 systemd[1]: Reloading...
Feb 13 19:02:49.584269 zram_generator::config[1578]: No configuration found.
Feb 13 19:02:49.981319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:50.089571 systemd[1]: Reloading finished in 629 ms.
Feb 13 19:02:50.143188 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:02:50.146118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:02:50.166566 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:02:50.173560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:50.190538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:50.202463 systemd[1]: Reloading requested from client PID 1633 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:02:50.202498 systemd[1]: Reloading...
Feb 13 19:02:50.270213 systemd-tmpfiles[1634]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:02:50.270911 systemd-tmpfiles[1634]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:02:50.274837 systemd-tmpfiles[1634]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:02:50.278895 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Feb 13 19:02:50.281495 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Feb 13 19:02:50.292981 systemd-tmpfiles[1634]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:02:50.293187 systemd-tmpfiles[1634]: Skipping /boot
Feb 13 19:02:50.317888 systemd-udevd[1635]: Using default interface naming scheme 'v255'.
Feb 13 19:02:50.351590 ldconfig[1508]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:02:50.359360 systemd-tmpfiles[1634]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:02:50.359388 systemd-tmpfiles[1634]: Skipping /boot
Feb 13 19:02:50.414256 zram_generator::config[1669]: No configuration found.
Feb 13 19:02:50.582574 (udev-worker)[1692]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:02:50.755067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:50.772284 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1692)
Feb 13 19:02:50.911047 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:02:50.911184 systemd[1]: Reloading finished in 708 ms.
Feb 13 19:02:50.939110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:50.943311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:02:50.967298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:51.036919 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:02:51.043990 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:02:51.049696 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:02:51.055992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:51.063576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:51.070549 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:02:51.089609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:02:51.094542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:02:51.102753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:02:51.110207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:02:51.113506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:02:51.136262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:02:51.142051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:02:51.144598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:02:51.144965 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:02:51.155712 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:02:51.164699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:51.172323 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:02:51.191699 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:02:51.215986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:02:51.218402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:02:51.223844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:02:51.226492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:02:51.263694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:02:51.273034 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:02:51.276824 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:02:51.286043 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:02:51.288316 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:02:51.298429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:02:51.300039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:02:51.304079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:02:51.366149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:02:51.376543 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:02:51.377854 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:02:51.381980 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:02:51.393476 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:02:51.417768 augenrules[1874]: No rules
Feb 13 19:02:51.418358 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:02:51.419096 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:02:51.423851 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:02:51.424214 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:02:51.459284 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:02:51.466001 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:02:51.477819 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:02:51.504950 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:02:51.510115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:51.518482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:51.530617 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:02:51.560255 lvm[1893]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:02:51.609946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:02:51.622618 systemd-networkd[1831]: lo: Link UP
Feb 13 19:02:51.622639 systemd-networkd[1831]: lo: Gained carrier
Feb 13 19:02:51.625444 systemd-networkd[1831]: Enumeration completed
Feb 13 19:02:51.625613 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:51.631018 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:51.631040 systemd-networkd[1831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:51.633453 systemd-networkd[1831]: eth0: Link UP
Feb 13 19:02:51.633812 systemd-networkd[1831]: eth0: Gained carrier
Feb 13 19:02:51.633857 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:51.636546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:02:51.644352 systemd-networkd[1831]: eth0: DHCPv4 address 172.31.23.196/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:02:51.660815 systemd-resolved[1832]: Positive Trust Anchors:
Feb 13 19:02:51.660875 systemd-resolved[1832]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:51.660939 systemd-resolved[1832]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:51.673044 systemd-resolved[1832]: Defaulting to hostname 'linux'.
Feb 13 19:02:51.676097 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:51.678345 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:51.680091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:51.682294 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:51.684460 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:02:51.686849 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:02:51.689605 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:02:51.691800 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:02:51.694057 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:02:51.696298 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:02:51.696346 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:51.697984 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:51.701031 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:02:51.706793 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:02:51.736480 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:02:51.739441 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:02:51.741609 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:51.743482 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:51.745427 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:02:51.745484 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:02:51.756397 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:02:51.761478 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:02:51.770756 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:02:51.776490 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:02:51.781689 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:02:51.783676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:02:51.787983 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:02:51.796200 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:02:51.805630 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:02:51.818747 jq[1902]: false
Feb 13 19:02:51.810819 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:02:51.817603 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:02:51.833744 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:02:51.835760 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:02:51.836606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:02:51.841137 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:02:51.846730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:02:51.856997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:02:51.858729 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:02:51.868785 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:02:51.869426 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:02:51.936729 dbus-daemon[1901]: [system] SELinux support is enabled Feb 13 19:02:51.939178 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:02:51.947673 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:02:51.947879 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:02:51.950530 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 19:02:51.950575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:02:51.962975 jq[1911]: true Feb 13 19:02:51.988560 (ntainerd)[1926]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:02:51.969134 dbus-daemon[1901]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1831 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:02:51.974180 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:02:51.999475 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:02:52.025921 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:02:52.026752 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:02:52.044352 update_engine[1910]: I20250213 19:02:52.043860 1910 main.cc:92] Flatcar Update Engine starting Feb 13 19:02:52.056489 update_engine[1910]: I20250213 19:02:52.049760 1910 update_check_scheduler.cc:74] Next update check in 4m18s Feb 13 19:02:52.051261 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 19:02:52.061649 extend-filesystems[1903]: Found loop4 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found loop5 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found loop6 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found loop7 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p1 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p2 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p3 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found usr Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p4 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p6 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p7 Feb 13 19:02:52.061649 extend-filesystems[1903]: Found nvme0n1p9 Feb 13 19:02:52.061649 extend-filesystems[1903]: Checking size of /dev/nvme0n1p9 Feb 13 19:02:52.059171 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: ---------------------------------------------------- Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: corporation. 
Support and training for ntp-4 are Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: available at https://www.nwtime.org/support Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: ---------------------------------------------------- Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: proto: precision = 0.096 usec (-23) Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: basedate set to 2025-02-01 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: gps base set to 2025-02-02 (week 2352) Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listen normally on 3 eth0 172.31.23.196:123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listen normally on 4 lo [::1]:123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: bind(21) AF_INET6 fe80::44f:e3ff:fe35:8779%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: unable to create socket on eth0 (5) for fe80::44f:e3ff:fe35:8779%2#123 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: failed to init interface for address fe80::44f:e3ff:fe35:8779%2 Feb 13 19:02:52.113545 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: Listening on routing socket on fd #21 for interface updates Feb 13 19:02:52.138519 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:02:52.138519 ntpd[1905]: 13 Feb 19:02:52 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:02:52.152827 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:02:52.176263 jq[1934]: true Feb 13 19:02:52.176494 extend-filesystems[1903]: Resized partition /dev/nvme0n1p9 Feb 13 19:02:52.190831 extend-filesystems[1951]: resize2fs 1.47.1 
(20-May-2024) Feb 13 19:02:52.221253 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:02:52.268111 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:02:52.268120 systemd-logind[1909]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:02:52.281445 systemd-logind[1909]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:02:52.286799 systemd-logind[1909]: New seat seat0. Feb 13 19:02:52.304250 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1692) Feb 13 19:02:52.345300 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:02:52.366808 extend-filesystems[1951]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:02:52.366808 extend-filesystems[1951]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:02:52.366808 extend-filesystems[1951]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:02:52.386413 extend-filesystems[1903]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:02:52.388401 coreos-metadata[1900]: Feb 13 19:02:52.380 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:02:52.372308 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:02:52.372659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:02:52.391512 coreos-metadata[1900]: Feb 13 19:02:52.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:02:52.392399 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 19:02:52.395084 coreos-metadata[1900]: Feb 13 19:02:52.394 INFO Fetch successful Feb 13 19:02:52.395570 coreos-metadata[1900]: Feb 13 19:02:52.395 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:02:52.399723 coreos-metadata[1900]: Feb 13 19:02:52.398 INFO Fetch successful Feb 13 19:02:52.400356 coreos-metadata[1900]: Feb 13 19:02:52.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:02:52.401397 coreos-metadata[1900]: Feb 13 19:02:52.401 INFO Fetch successful Feb 13 19:02:52.403005 coreos-metadata[1900]: Feb 13 19:02:52.401 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:02:52.403584 locksmithd[1941]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:02:52.404176 coreos-metadata[1900]: Feb 13 19:02:52.403 INFO Fetch successful Feb 13 19:02:52.404176 coreos-metadata[1900]: Feb 13 19:02:52.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:02:52.405735 coreos-metadata[1900]: Feb 13 19:02:52.405 INFO Fetch failed with 404: resource not found Feb 13 19:02:52.405735 coreos-metadata[1900]: Feb 13 19:02:52.405 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:02:52.408143 coreos-metadata[1900]: Feb 13 19:02:52.408 INFO Fetch successful Feb 13 19:02:52.408143 coreos-metadata[1900]: Feb 13 19:02:52.408 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:02:52.409777 coreos-metadata[1900]: Feb 13 19:02:52.409 INFO Fetch successful Feb 13 19:02:52.409918 coreos-metadata[1900]: Feb 13 19:02:52.409 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:02:52.410673 coreos-metadata[1900]: Feb 13 19:02:52.410 INFO Fetch successful Feb 13 19:02:52.410673 coreos-metadata[1900]: Feb 13 19:02:52.410 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:02:52.417349 coreos-metadata[1900]: Feb 13 19:02:52.416 INFO Fetch successful Feb 13 19:02:52.417349 coreos-metadata[1900]: Feb 13 19:02:52.416 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:02:52.417923 coreos-metadata[1900]: Feb 13 19:02:52.417 INFO Fetch successful Feb 13 19:02:52.428307 bash[1984]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:52.456418 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:02:52.457336 dbus-daemon[1901]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1936 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:02:52.464178 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:02:52.469134 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:02:52.481772 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:02:52.488700 systemd[1]: Starting sshkeys.service... Feb 13 19:02:52.525450 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:02:52.530027 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:02:52.548736 polkitd[2012]: Started polkitd version 121 Feb 13 19:02:52.559466 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Feb 13 19:02:52.575869 polkitd[2012]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:02:52.576014 polkitd[2012]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:02:52.581450 polkitd[2012]: Finished loading, compiling and executing 2 rules Feb 13 19:02:52.582933 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:02:52.583879 polkitd[2012]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:02:52.608310 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:02:52.611745 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:02:52.655208 systemd-resolved[1832]: System hostname changed to 'ip-172-31-23-196'. Feb 13 19:02:52.658024 systemd-hostnamed[1936]: Hostname set to (transient) Feb 13 19:02:52.850518 coreos-metadata[2027]: Feb 13 19:02:52.848 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:02:52.850518 coreos-metadata[2027]: Feb 13 19:02:52.850 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:02:52.855525 coreos-metadata[2027]: Feb 13 19:02:52.855 INFO Fetch successful Feb 13 19:02:52.855525 coreos-metadata[2027]: Feb 13 19:02:52.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:02:52.859788 coreos-metadata[2027]: Feb 13 19:02:52.859 INFO Fetch successful Feb 13 19:02:52.865731 unknown[2027]: wrote ssh authorized keys file for user: core Feb 13 19:02:52.873507 systemd-networkd[1831]: eth0: Gained IPv6LL Feb 13 19:02:52.889425 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:02:52.899656 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:02:52.911765 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Feb 13 19:02:52.919593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:52.940668 update-ssh-keys[2091]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:52.931513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:02:52.937340 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:02:52.947994 systemd[1]: Finished sshkeys.service. Feb 13 19:02:52.950808 containerd[1926]: time="2025-02-13T19:02:52.944728403Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:02:53.061312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:02:53.087801 containerd[1926]: time="2025-02-13T19:02:53.087691904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.094750352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.094813316Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.094848500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095144420Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095176412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095366912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095395160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095681540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095713172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095743568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096267 containerd[1926]: time="2025-02-13T19:02:53.095769752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096837 containerd[1926]: time="2025-02-13T19:02:53.095963444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.096890 amazon-ssm-agent[2093]: Initializing new seelog logger Feb 13 19:02:53.096890 amazon-ssm-agent[2093]: New Seelog Logger Creation Complete Feb 13 19:02:53.096890 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:02:53.096890 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 processing appconfig overrides Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 processing appconfig overrides Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 processing appconfig overrides Feb 13 19:02:53.099258 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO Proxy environment variables: Feb 13 19:02:53.100260 containerd[1926]: time="2025-02-13T19:02:53.100155488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:53.100602 containerd[1926]: time="2025-02-13T19:02:53.100512176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:53.101303 containerd[1926]: time="2025-02-13T19:02:53.100698440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:02:53.101303 containerd[1926]: time="2025-02-13T19:02:53.100930952Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:02:53.101303 containerd[1926]: time="2025-02-13T19:02:53.101032748Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:02:53.102459 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.102459 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:53.102601 amazon-ssm-agent[2093]: 2025/02/13 19:02:53 processing appconfig overrides Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.106796324Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.106888304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.106927052Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.106964072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.106996592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.107286920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.107750216Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.107939912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.107980004Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.108015236Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.108045488Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.108078596Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.108107708Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111252 containerd[1926]: time="2025-02-13T19:02:53.108137312Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108169568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108198812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108250388Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108280928Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108320492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108351680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108381944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108412736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108447212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108478688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108506468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108534752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108565988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.111914 containerd[1926]: time="2025-02-13T19:02:53.108600896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108632192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108660512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108688472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108721580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108763940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108794168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108820160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108947084Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.108991868Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.109017104Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.109044416Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.109067708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.109103204Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:02:53.112490 containerd[1926]: time="2025-02-13T19:02:53.109126940Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:02:53.113029 containerd[1926]: time="2025-02-13T19:02:53.109154060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:02:53.116039 containerd[1926]: time="2025-02-13T19:02:53.115907720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:02:53.118974 containerd[1926]: time="2025-02-13T19:02:53.118279892Z" level=info msg="Connect containerd service" Feb 13 19:02:53.118974 containerd[1926]: time="2025-02-13T19:02:53.118394852Z" level=info msg="using legacy CRI server" Feb 13 19:02:53.118974 containerd[1926]: time="2025-02-13T19:02:53.118418672Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:02:53.118974 containerd[1926]: time="2025-02-13T19:02:53.118732112Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:02:53.121283 containerd[1926]: 
time="2025-02-13T19:02:53.121182860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121553252Z" level=info msg="Start subscribing containerd event" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121620548Z" level=info msg="Start recovering state" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121749224Z" level=info msg="Start event monitor" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121772900Z" level=info msg="Start snapshots syncer" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121793756Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:02:53.122326 containerd[1926]: time="2025-02-13T19:02:53.121815404Z" level=info msg="Start streaming server" Feb 13 19:02:53.124698 containerd[1926]: time="2025-02-13T19:02:53.124512776Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:02:53.124870 containerd[1926]: time="2025-02-13T19:02:53.124842920Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:02:53.130595 containerd[1926]: time="2025-02-13T19:02:53.129332120Z" level=info msg="containerd successfully booted in 0.186958s" Feb 13 19:02:53.129465 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:02:53.199840 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO http_proxy: Feb 13 19:02:53.298243 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO no_proxy: Feb 13 19:02:53.349061 sshd_keygen[1937]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:02:53.395733 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO https_proxy: Feb 13 19:02:53.415548 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 19:02:53.428675 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:02:53.436794 systemd[1]: Started sshd@0-172.31.23.196:22-147.75.109.163:37920.service - OpenSSH per-connection server daemon (147.75.109.163:37920). Feb 13 19:02:53.470882 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:02:53.472291 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:02:53.486779 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:02:53.495451 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:02:53.531894 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:02:53.552031 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:02:53.563805 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:02:53.566795 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:02:53.593849 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:02:53.693166 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO Agent will take identity from EC2 Feb 13 19:02:53.717353 sshd[2130]: Accepted publickey for core from 147.75.109.163 port 37920 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:02:53.723323 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:53.747165 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:02:53.759728 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:02:53.775175 systemd-logind[1909]: New session 1 of user core. 
Feb 13 19:02:53.793557 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:53.801115 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:02:53.815014 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:02:53.840834 (systemd)[2141]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:02:53.892819 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:53.993298 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:54.091329 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:02:54.097782 systemd[2141]: Queued start job for default target default.target. Feb 13 19:02:54.105993 systemd[2141]: Created slice app.slice - User Application Slice. Feb 13 19:02:54.106509 systemd[2141]: Reached target paths.target - Paths. Feb 13 19:02:54.106548 systemd[2141]: Reached target timers.target - Timers. Feb 13 19:02:54.111465 systemd[2141]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:02:54.135488 systemd[2141]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:02:54.135937 systemd[2141]: Reached target sockets.target - Sockets. Feb 13 19:02:54.136099 systemd[2141]: Reached target basic.target - Basic System. Feb 13 19:02:54.136187 systemd[2141]: Reached target default.target - Main User Target. Feb 13 19:02:54.136281 systemd[2141]: Startup finished in 279ms. Feb 13 19:02:54.136774 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:02:54.148774 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 19:02:54.191771 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:02:54.292195 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:02:54.310331 systemd[1]: Started sshd@1-172.31.23.196:22-147.75.109.163:37926.service - OpenSSH per-connection server daemon (147.75.109.163:37926). Feb 13 19:02:54.392534 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:02:54.492857 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [Registrar] Starting registrar module Feb 13 19:02:54.519317 sshd[2153]: Accepted publickey for core from 147.75.109.163 port 37926 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:02:54.521963 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:54.524562 amazon-ssm-agent[2093]: 2025-02-13 19:02:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:02:54.524562 amazon-ssm-agent[2093]: 2025-02-13 19:02:54 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:02:54.524562 amazon-ssm-agent[2093]: 2025-02-13 19:02:54 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:02:54.524562 amazon-ssm-agent[2093]: 2025-02-13 19:02:54 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:02:54.524562 amazon-ssm-agent[2093]: 2025-02-13 19:02:54 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:02:54.529653 systemd-logind[1909]: New session 2 of user core. Feb 13 19:02:54.539490 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:02:54.593420 amazon-ssm-agent[2093]: 2025-02-13 19:02:54 INFO [CredentialRefresher] Next credential rotation will be in 30.608323956433335 minutes Feb 13 19:02:54.668272 sshd[2155]: Connection closed by 147.75.109.163 port 37926 Feb 13 19:02:54.669093 sshd-session[2153]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:54.676756 systemd[1]: sshd@1-172.31.23.196:22-147.75.109.163:37926.service: Deactivated successfully. Feb 13 19:02:54.680811 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:02:54.683416 systemd-logind[1909]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:02:54.685588 systemd-logind[1909]: Removed session 2. Feb 13 19:02:54.707765 systemd[1]: Started sshd@2-172.31.23.196:22-147.75.109.163:37928.service - OpenSSH per-connection server daemon (147.75.109.163:37928). Feb 13 19:02:54.900303 sshd[2160]: Accepted publickey for core from 147.75.109.163 port 37928 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:02:54.902910 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:54.913501 systemd-logind[1909]: New session 3 of user core. Feb 13 19:02:54.920506 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:02:55.048261 sshd[2162]: Connection closed by 147.75.109.163 port 37928 Feb 13 19:02:55.049114 sshd-session[2160]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:55.056035 systemd[1]: sshd@2-172.31.23.196:22-147.75.109.163:37928.service: Deactivated successfully. Feb 13 19:02:55.060208 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:02:55.062871 systemd-logind[1909]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:02:55.064921 systemd-logind[1909]: Removed session 3. 
Feb 13 19:02:55.080080 ntpd[1905]: Listen normally on 6 eth0 [fe80::44f:e3ff:fe35:8779%2]:123 Feb 13 19:02:55.080603 ntpd[1905]: 13 Feb 19:02:55 ntpd[1905]: Listen normally on 6 eth0 [fe80::44f:e3ff:fe35:8779%2]:123 Feb 13 19:02:55.237563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:55.240845 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:02:55.246388 systemd[1]: Startup finished in 1.076s (kernel) + 7.875s (initrd) + 8.620s (userspace) = 17.573s. Feb 13 19:02:55.251803 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:55.552970 amazon-ssm-agent[2093]: 2025-02-13 19:02:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:02:55.654611 amazon-ssm-agent[2093]: 2025-02-13 19:02:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2181) started Feb 13 19:02:55.754417 amazon-ssm-agent[2093]: 2025-02-13 19:02:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:02:56.384684 kubelet[2171]: E0213 19:02:56.384601 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:56.387868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:56.388173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:56.388710 systemd[1]: kubelet.service: Consumed 1.244s CPU time. 
Feb 13 19:03:05.087697 systemd[1]: Started sshd@3-172.31.23.196:22-147.75.109.163:57640.service - OpenSSH per-connection server daemon (147.75.109.163:57640). Feb 13 19:03:05.272436 sshd[2194]: Accepted publickey for core from 147.75.109.163 port 57640 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:05.274834 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:05.281984 systemd-logind[1909]: New session 4 of user core. Feb 13 19:03:05.289753 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:03:05.417558 sshd[2196]: Connection closed by 147.75.109.163 port 57640 Feb 13 19:03:05.418546 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:05.423180 systemd[1]: sshd@3-172.31.23.196:22-147.75.109.163:57640.service: Deactivated successfully. Feb 13 19:03:05.426460 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:03:05.429336 systemd-logind[1909]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:03:05.431051 systemd-logind[1909]: Removed session 4. Feb 13 19:03:05.456734 systemd[1]: Started sshd@4-172.31.23.196:22-147.75.109.163:57646.service - OpenSSH per-connection server daemon (147.75.109.163:57646). Feb 13 19:03:05.646170 sshd[2201]: Accepted publickey for core from 147.75.109.163 port 57646 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:05.648633 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:05.655763 systemd-logind[1909]: New session 5 of user core. Feb 13 19:03:05.668452 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:03:05.784267 sshd[2203]: Connection closed by 147.75.109.163 port 57646 Feb 13 19:03:05.785074 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:05.790890 systemd[1]: sshd@4-172.31.23.196:22-147.75.109.163:57646.service: Deactivated successfully. Feb 13 19:03:05.794119 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:03:05.795605 systemd-logind[1909]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:03:05.797627 systemd-logind[1909]: Removed session 5. Feb 13 19:03:05.830940 systemd[1]: Started sshd@5-172.31.23.196:22-147.75.109.163:57654.service - OpenSSH per-connection server daemon (147.75.109.163:57654). Feb 13 19:03:06.012620 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 57654 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:06.015075 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:06.022327 systemd-logind[1909]: New session 6 of user core. Feb 13 19:03:06.030469 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:03:06.158371 sshd[2210]: Connection closed by 147.75.109.163 port 57654 Feb 13 19:03:06.158258 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:06.163320 systemd[1]: sshd@5-172.31.23.196:22-147.75.109.163:57654.service: Deactivated successfully. Feb 13 19:03:06.167725 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:03:06.170689 systemd-logind[1909]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:03:06.172605 systemd-logind[1909]: Removed session 6. Feb 13 19:03:06.196722 systemd[1]: Started sshd@6-172.31.23.196:22-147.75.109.163:57656.service - OpenSSH per-connection server daemon (147.75.109.163:57656). 
Feb 13 19:03:06.377447 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 57656 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:06.379861 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:06.388053 systemd-logind[1909]: New session 7 of user core. Feb 13 19:03:06.398464 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:03:06.400206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:03:06.407682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:06.538756 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:03:06.539463 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:06.563515 sudo[2221]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:06.590294 sshd[2218]: Connection closed by 147.75.109.163 port 57656 Feb 13 19:03:06.590646 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:06.598675 systemd[1]: sshd@6-172.31.23.196:22-147.75.109.163:57656.service: Deactivated successfully. Feb 13 19:03:06.605015 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:03:06.607016 systemd-logind[1909]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:03:06.630842 systemd[1]: Started sshd@7-172.31.23.196:22-147.75.109.163:57660.service - OpenSSH per-connection server daemon (147.75.109.163:57660). Feb 13 19:03:06.633238 systemd-logind[1909]: Removed session 7. Feb 13 19:03:06.743287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:03:06.759825 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:06.827638 sshd[2226]: Accepted publickey for core from 147.75.109.163 port 57660 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:06.829241 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:06.836799 systemd-logind[1909]: New session 8 of user core. Feb 13 19:03:06.847037 kubelet[2233]: E0213 19:03:06.846968 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:06.847494 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:06.857400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:06.857751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:06.956386 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:03:06.957014 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:06.962995 sudo[2242]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:06.972670 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:03:06.973348 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:06.994475 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:03:07.050705 augenrules[2264]: No rules Feb 13 19:03:07.053006 systemd[1]: audit-rules.service: Deactivated successfully. 
Feb 13 19:03:07.053489 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:03:07.057740 sudo[2241]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:07.080284 sshd[2239]: Connection closed by 147.75.109.163 port 57660 Feb 13 19:03:07.081023 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:07.087914 systemd[1]: sshd@7-172.31.23.196:22-147.75.109.163:57660.service: Deactivated successfully. Feb 13 19:03:07.090875 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:03:07.092178 systemd-logind[1909]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:07.094048 systemd-logind[1909]: Removed session 8. Feb 13 19:03:07.119731 systemd[1]: Started sshd@8-172.31.23.196:22-147.75.109.163:57672.service - OpenSSH per-connection server daemon (147.75.109.163:57672). Feb 13 19:03:07.309306 sshd[2272]: Accepted publickey for core from 147.75.109.163 port 57672 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:07.311390 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:07.318529 systemd-logind[1909]: New session 9 of user core. Feb 13 19:03:07.330444 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:07.433289 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:03:07.433889 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:08.328295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:08.340698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:08.390704 systemd[1]: Reloading requested from client PID 2307 ('systemctl') (unit session-9.scope)... Feb 13 19:03:08.390904 systemd[1]: Reloading... Feb 13 19:03:08.599291 zram_generator::config[2349]: No configuration found. 
Feb 13 19:03:08.855194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:09.015721 systemd[1]: Reloading finished in 624 ms. Feb 13 19:03:09.111242 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:03:09.111481 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:03:09.112342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:09.121516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:09.408538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:09.427003 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:03:09.491206 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:03:09.491206 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:03:09.491206 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:03:09.491772 kubelet[2411]: I0213 19:03:09.491337 2411 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:03:10.598273 kubelet[2411]: I0213 19:03:10.597489 2411 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:03:10.598273 kubelet[2411]: I0213 19:03:10.597537 2411 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:03:10.599004 kubelet[2411]: I0213 19:03:10.598968 2411 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:03:10.644773 kubelet[2411]: I0213 19:03:10.644728 2411 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:03:10.656893 kubelet[2411]: E0213 19:03:10.656832 2411 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:03:10.657175 kubelet[2411]: I0213 19:03:10.657152 2411 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:03:10.663815 kubelet[2411]: I0213 19:03:10.663474 2411 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:03:10.664793 kubelet[2411]: I0213 19:03:10.664765 2411 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:03:10.665297 kubelet[2411]: I0213 19:03:10.665216 2411 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:03:10.665706 kubelet[2411]: I0213 19:03:10.665393 2411 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.23.196","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Feb 13 19:03:10.666308 kubelet[2411]: I0213 19:03:10.665923 2411 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:03:10.666308 kubelet[2411]: I0213 19:03:10.665951 2411 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:03:10.666308 kubelet[2411]: I0213 19:03:10.666145 2411 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:03:10.667917 kubelet[2411]: I0213 19:03:10.667882 2411 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:03:10.668077 kubelet[2411]: I0213 19:03:10.668057 2411 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:03:10.668755 kubelet[2411]: I0213 19:03:10.668259 2411 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:03:10.668755 kubelet[2411]: I0213 19:03:10.668304 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:03:10.668755 kubelet[2411]: E0213 19:03:10.668382 2411 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:10.668755 kubelet[2411]: E0213 19:03:10.668557 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:10.674175 kubelet[2411]: I0213 19:03:10.674093 2411 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:03:10.677084 kubelet[2411]: I0213 19:03:10.677023 2411 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:03:10.677192 kubelet[2411]: W0213 19:03:10.677159 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:03:10.678290 kubelet[2411]: I0213 19:03:10.678245 2411 server.go:1269] "Started kubelet" Feb 13 19:03:10.679996 kubelet[2411]: I0213 19:03:10.679438 2411 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:03:10.681399 kubelet[2411]: I0213 19:03:10.681320 2411 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:03:10.685259 kubelet[2411]: I0213 19:03:10.684577 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:03:10.685259 kubelet[2411]: I0213 19:03:10.685048 2411 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:03:10.685865 kubelet[2411]: I0213 19:03:10.685820 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:03:10.692932 kubelet[2411]: I0213 19:03:10.691736 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:03:10.699689 kubelet[2411]: E0213 19:03:10.697662 2411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.196.1823d9d52b9e1965 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.196,UID:172.31.23.196,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.23.196,},FirstTimestamp:2025-02-13 19:03:10.678186341 +0000 UTC m=+1.245528065,LastTimestamp:2025-02-13 19:03:10.678186341 +0000 UTC m=+1.245528065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.196,}" Feb 13 19:03:10.700047 kubelet[2411]: W0213 19:03:10.700018 
2411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.23.196" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:03:10.700215 kubelet[2411]: E0213 19:03:10.700186 2411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.23.196\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:03:10.700547 kubelet[2411]: W0213 19:03:10.700520 2411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:03:10.700687 kubelet[2411]: E0213 19:03:10.700659 2411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:03:10.701350 kubelet[2411]: E0213 19:03:10.701315 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found" Feb 13 19:03:10.702072 kubelet[2411]: I0213 19:03:10.702048 2411 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:03:10.702554 kubelet[2411]: I0213 19:03:10.702528 2411 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:03:10.702768 kubelet[2411]: I0213 19:03:10.702749 2411 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:03:10.704006 kubelet[2411]: E0213 19:03:10.703952 2411 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:03:10.705638 kubelet[2411]: I0213 19:03:10.705591 2411 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:03:10.705964 kubelet[2411]: I0213 19:03:10.705920 2411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:03:10.717276 kubelet[2411]: I0213 19:03:10.716837 2411 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:03:10.739386 kubelet[2411]: E0213 19:03:10.739094 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.23.196\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 19:03:10.740208 kubelet[2411]: W0213 19:03:10.740068 2411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 19:03:10.740469 kubelet[2411]: E0213 19:03:10.740408 2411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 19:03:10.740833 kubelet[2411]: E0213 19:03:10.740673 2411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.196.1823d9d52d26f022 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.196,UID:172.31.23.196,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.23.196,},FirstTimestamp:2025-02-13 19:03:10.703931426 +0000 UTC m=+1.271273138,LastTimestamp:2025-02-13 19:03:10.703931426 +0000 UTC m=+1.271273138,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.196,}"
Feb 13 19:03:10.755323 kubelet[2411]: I0213 19:03:10.755119 2411 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:03:10.755323 kubelet[2411]: I0213 19:03:10.755151 2411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:03:10.755323 kubelet[2411]: I0213 19:03:10.755180 2411 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:03:10.757048 kubelet[2411]: E0213 19:03:10.756763 2411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.196.1823d9d5301d53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.196,UID:172.31.23.196,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.23.196 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.23.196,},FirstTimestamp:2025-02-13 19:03:10.753633231 +0000 UTC m=+1.320974943,LastTimestamp:2025-02-13 19:03:10.753633231 +0000 UTC m=+1.320974943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.196,}"
Feb 13 19:03:10.759250 kubelet[2411]: I0213 19:03:10.759122 2411 policy_none.go:49] "None policy: Start"
Feb 13 19:03:10.761853 kubelet[2411]: I0213 19:03:10.761774 2411 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:03:10.761853 kubelet[2411]: I0213 19:03:10.761823 2411 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:03:10.777543 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:03:10.799751 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:03:10.802867 kubelet[2411]: E0213 19:03:10.802835 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:10.810184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:03:10.813906 kubelet[2411]: I0213 19:03:10.813835 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:03:10.816076 kubelet[2411]: I0213 19:03:10.816030 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:03:10.816076 kubelet[2411]: I0213 19:03:10.816075 2411 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:03:10.816309 kubelet[2411]: I0213 19:03:10.816105 2411 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 19:03:10.816309 kubelet[2411]: E0213 19:03:10.816284 2411 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:03:10.824266 kubelet[2411]: I0213 19:03:10.822908 2411 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:03:10.824266 kubelet[2411]: I0213 19:03:10.823314 2411 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:03:10.824266 kubelet[2411]: I0213 19:03:10.823338 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:03:10.827429 kubelet[2411]: I0213 19:03:10.827387 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:03:10.832579 kubelet[2411]: E0213 19:03:10.832513 2411 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.196\" not found"
Feb 13 19:03:10.925415 kubelet[2411]: I0213 19:03:10.925261 2411 kubelet_node_status.go:72] "Attempting to register node" node="172.31.23.196"
Feb 13 19:03:10.932401 kubelet[2411]: I0213 19:03:10.932350 2411 kubelet_node_status.go:75] "Successfully registered node" node="172.31.23.196"
Feb 13 19:03:10.932401 kubelet[2411]: E0213 19:03:10.932402 2411 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.23.196\": node \"172.31.23.196\" not found"
Feb 13 19:03:10.969293 kubelet[2411]: E0213 19:03:10.969212 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.016872 sudo[2275]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:11.039281 sshd[2274]: Connection closed by 147.75.109.163 port 57672
Feb 13 19:03:11.040138 sshd-session[2272]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:11.045813 systemd[1]: sshd@8-172.31.23.196:22-147.75.109.163:57672.service: Deactivated successfully.
Feb 13 19:03:11.049776 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:03:11.051730 systemd-logind[1909]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:03:11.054575 systemd-logind[1909]: Removed session 9.
Feb 13 19:03:11.069631 kubelet[2411]: E0213 19:03:11.069577 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.170211 kubelet[2411]: E0213 19:03:11.170165 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.271215 kubelet[2411]: E0213 19:03:11.271086 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.371705 kubelet[2411]: E0213 19:03:11.371662 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.472324 kubelet[2411]: E0213 19:03:11.472276 2411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.196\" not found"
Feb 13 19:03:11.574682 kubelet[2411]: I0213 19:03:11.573893 2411 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 19:03:11.574815 containerd[1926]: time="2025-02-13T19:03:11.574368464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:03:11.575365 kubelet[2411]: I0213 19:03:11.575042 2411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 19:03:11.603291 kubelet[2411]: I0213 19:03:11.603244 2411 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 19:03:11.603795 kubelet[2411]: W0213 19:03:11.603445 2411 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:03:11.603795 kubelet[2411]: W0213 19:03:11.603498 2411 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:03:11.669092 kubelet[2411]: I0213 19:03:11.669045 2411 apiserver.go:52] "Watching apiserver"
Feb 13 19:03:11.669278 kubelet[2411]: E0213 19:03:11.669214 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:11.683236 kubelet[2411]: E0213 19:03:11.683154 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:11.698714 systemd[1]: Created slice kubepods-besteffort-pod4a33011a_608e_4c49_9353_4360b36df732.slice - libcontainer container kubepods-besteffort-pod4a33011a_608e_4c49_9353_4360b36df732.slice.
Feb 13 19:03:11.707666 kubelet[2411]: I0213 19:03:11.707544 2411 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 19:03:11.708505 kubelet[2411]: I0213 19:03:11.708463 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/330ee996-d399-411a-8fdf-f83d742359a5-xtables-lock\") pod \"kube-proxy-f4svj\" (UID: \"330ee996-d399-411a-8fdf-f83d742359a5\") " pod="kube-system/kube-proxy-f4svj"
Feb 13 19:03:11.708726 kubelet[2411]: I0213 19:03:11.708698 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9qm\" (UniqueName: \"kubernetes.io/projected/330ee996-d399-411a-8fdf-f83d742359a5-kube-api-access-dk9qm\") pod \"kube-proxy-f4svj\" (UID: \"330ee996-d399-411a-8fdf-f83d742359a5\") " pod="kube-system/kube-proxy-f4svj"
Feb 13 19:03:11.708894 kubelet[2411]: I0213 19:03:11.708864 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-lib-modules\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.709058 kubelet[2411]: I0213 19:03:11.709034 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a33011a-608e-4c49-9353-4360b36df732-tigera-ca-bundle\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.709284 kubelet[2411]: I0213 19:03:11.709162 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/099bb73c-5027-4347-b5a3-d47745bfcefe-varrun\") pod \"csi-node-driver-l4mms\" (UID: \"099bb73c-5027-4347-b5a3-d47745bfcefe\") " pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:11.709284 kubelet[2411]: I0213 19:03:11.709242 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/099bb73c-5027-4347-b5a3-d47745bfcefe-registration-dir\") pod \"csi-node-driver-l4mms\" (UID: \"099bb73c-5027-4347-b5a3-d47745bfcefe\") " pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:11.710251 kubelet[2411]: I0213 19:03:11.709468 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/330ee996-d399-411a-8fdf-f83d742359a5-kube-proxy\") pod \"kube-proxy-f4svj\" (UID: \"330ee996-d399-411a-8fdf-f83d742359a5\") " pod="kube-system/kube-proxy-f4svj"
Feb 13 19:03:11.710251 kubelet[2411]: I0213 19:03:11.709527 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-var-lib-calico\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710251 kubelet[2411]: I0213 19:03:11.709579 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-var-run-calico\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710251 kubelet[2411]: I0213 19:03:11.709621 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-flexvol-driver-host\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710251 kubelet[2411]: I0213 19:03:11.709670 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzmt\" (UniqueName: \"kubernetes.io/projected/4a33011a-608e-4c49-9353-4360b36df732-kube-api-access-bqzmt\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710577 kubelet[2411]: I0213 19:03:11.709717 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxbv\" (UniqueName: \"kubernetes.io/projected/099bb73c-5027-4347-b5a3-d47745bfcefe-kube-api-access-ttxbv\") pod \"csi-node-driver-l4mms\" (UID: \"099bb73c-5027-4347-b5a3-d47745bfcefe\") " pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:11.710577 kubelet[2411]: I0213 19:03:11.709794 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330ee996-d399-411a-8fdf-f83d742359a5-lib-modules\") pod \"kube-proxy-f4svj\" (UID: \"330ee996-d399-411a-8fdf-f83d742359a5\") " pod="kube-system/kube-proxy-f4svj"
Feb 13 19:03:11.710577 kubelet[2411]: I0213 19:03:11.709835 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-policysync\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710577 kubelet[2411]: I0213 19:03:11.709880 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4a33011a-608e-4c49-9353-4360b36df732-node-certs\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710577 kubelet[2411]: I0213 19:03:11.709931 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-cni-bin-dir\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710811 kubelet[2411]: I0213 19:03:11.709978 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-cni-net-dir\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710811 kubelet[2411]: I0213 19:03:11.710028 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-cni-log-dir\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.710811 kubelet[2411]: I0213 19:03:11.710078 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/099bb73c-5027-4347-b5a3-d47745bfcefe-kubelet-dir\") pod \"csi-node-driver-l4mms\" (UID: \"099bb73c-5027-4347-b5a3-d47745bfcefe\") " pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:11.710811 kubelet[2411]: I0213 19:03:11.710119 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/099bb73c-5027-4347-b5a3-d47745bfcefe-socket-dir\") pod \"csi-node-driver-l4mms\" (UID: \"099bb73c-5027-4347-b5a3-d47745bfcefe\") " pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:11.710811 kubelet[2411]: I0213 19:03:11.710165 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a33011a-608e-4c49-9353-4360b36df732-xtables-lock\") pod \"calico-node-gn7vj\" (UID: \"4a33011a-608e-4c49-9353-4360b36df732\") " pod="calico-system/calico-node-gn7vj"
Feb 13 19:03:11.721202 systemd[1]: Created slice kubepods-besteffort-pod330ee996_d399_411a_8fdf_f83d742359a5.slice - libcontainer container kubepods-besteffort-pod330ee996_d399_411a_8fdf_f83d742359a5.slice.
Feb 13 19:03:11.812699 kubelet[2411]: E0213 19:03:11.812659 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.813034 kubelet[2411]: W0213 19:03:11.812829 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.813034 kubelet[2411]: E0213 19:03:11.812885 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.813516 kubelet[2411]: E0213 19:03:11.813355 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.813516 kubelet[2411]: W0213 19:03:11.813389 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.813516 kubelet[2411]: E0213 19:03:11.813411 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.814154 kubelet[2411]: E0213 19:03:11.813933 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.814154 kubelet[2411]: W0213 19:03:11.813953 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.814154 kubelet[2411]: E0213 19:03:11.813998 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.814889 kubelet[2411]: E0213 19:03:11.814678 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.814889 kubelet[2411]: W0213 19:03:11.814701 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.814889 kubelet[2411]: E0213 19:03:11.814835 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.815663 kubelet[2411]: E0213 19:03:11.815454 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.815663 kubelet[2411]: W0213 19:03:11.815475 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.815663 kubelet[2411]: E0213 19:03:11.815589 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.815916 kubelet[2411]: E0213 19:03:11.815888 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.815985 kubelet[2411]: W0213 19:03:11.815914 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.816212 kubelet[2411]: E0213 19:03:11.816080 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.816355 kubelet[2411]: E0213 19:03:11.816308 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.816355 kubelet[2411]: W0213 19:03:11.816341 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.817754 kubelet[2411]: E0213 19:03:11.816644 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.817754 kubelet[2411]: W0213 19:03:11.816669 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.817754 kubelet[2411]: E0213 19:03:11.816918 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.817754 kubelet[2411]: W0213 19:03:11.816931 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.817754 kubelet[2411]: E0213 19:03:11.817207 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.817754 kubelet[2411]: W0213 19:03:11.817256 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.817754 kubelet[2411]: E0213 19:03:11.817509 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.817754 kubelet[2411]: W0213 19:03:11.817526 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.817754 kubelet[2411]: E0213 19:03:11.817764 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.818237 kubelet[2411]: W0213 19:03:11.817777 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.818237 kubelet[2411]: E0213 19:03:11.818017 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.818237 kubelet[2411]: W0213 19:03:11.818030 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.818397 kubelet[2411]: E0213 19:03:11.818300 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.818397 kubelet[2411]: W0213 19:03:11.818314 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818566 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.819699 kubelet[2411]: W0213 19:03:11.818592 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818714 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818756 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818800 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818825 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818850 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818861 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.819699 kubelet[2411]: E0213 19:03:11.818874 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.819699 kubelet[2411]: W0213 19:03:11.818876 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.818887 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.818914 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.818936 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.818957 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.819511 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.820255 kubelet[2411]: W0213 19:03:11.819532 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.820255 kubelet[2411]: E0213 19:03:11.819595 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.820877 kubelet[2411]: E0213 19:03:11.820685 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.820877 kubelet[2411]: W0213 19:03:11.820719 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.821165 kubelet[2411]: E0213 19:03:11.821011 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.821516 kubelet[2411]: E0213 19:03:11.821350 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.821516 kubelet[2411]: W0213 19:03:11.821372 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.821516 kubelet[2411]: E0213 19:03:11.821400 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.821813 kubelet[2411]: E0213 19:03:11.821792 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.821910 kubelet[2411]: W0213 19:03:11.821888 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.822162 kubelet[2411]: E0213 19:03:11.822037 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.822517 kubelet[2411]: E0213 19:03:11.822354 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.822517 kubelet[2411]: W0213 19:03:11.822375 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.822517 kubelet[2411]: E0213 19:03:11.822415 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.822783 kubelet[2411]: E0213 19:03:11.822763 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.822894 kubelet[2411]: W0213 19:03:11.822873 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.823026 kubelet[2411]: E0213 19:03:11.822991 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.823394 kubelet[2411]: E0213 19:03:11.823372 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.823648 kubelet[2411]: W0213 19:03:11.823495 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.823648 kubelet[2411]: E0213 19:03:11.823544 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.823864 kubelet[2411]: E0213 19:03:11.823844 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.823955 kubelet[2411]: W0213 19:03:11.823934 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.824090 kubelet[2411]: E0213 19:03:11.824056 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.824488 kubelet[2411]: E0213 19:03:11.824466 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.824787 kubelet[2411]: W0213 19:03:11.824568 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.824787 kubelet[2411]: E0213 19:03:11.824615 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.825872 kubelet[2411]: E0213 19:03:11.825631 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.825872 kubelet[2411]: W0213 19:03:11.825657 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.825872 kubelet[2411]: E0213 19:03:11.825712 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.826749 kubelet[2411]: E0213 19:03:11.826525 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.826749 kubelet[2411]: W0213 19:03:11.826548 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.826749 kubelet[2411]: E0213 19:03:11.826599 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.827532 kubelet[2411]: E0213 19:03:11.827455 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.827865 kubelet[2411]: W0213 19:03:11.827663 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.827865 kubelet[2411]: E0213 19:03:11.827728 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.828505 kubelet[2411]: E0213 19:03:11.828389 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.828505 kubelet[2411]: W0213 19:03:11.828412 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.828505 kubelet[2411]: E0213 19:03:11.828468 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.829025 kubelet[2411]: E0213 19:03:11.828995 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.829102 kubelet[2411]: W0213 19:03:11.829023 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.829974 kubelet[2411]: E0213 19:03:11.829771 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.829974 kubelet[2411]: W0213 19:03:11.829795 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.830288 kubelet[2411]: E0213 19:03:11.830263 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.830621 kubelet[2411]: E0213 19:03:11.830296 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.830935 kubelet[2411]: E0213 19:03:11.830800 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.830935 kubelet[2411]: W0213 19:03:11.830889 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.831306 kubelet[2411]: E0213 19:03:11.831165 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.831512 kubelet[2411]: E0213 19:03:11.831491 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.831733 kubelet[2411]: W0213 19:03:11.831588 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.831867 kubelet[2411]: E0213 19:03:11.831843 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.832168 kubelet[2411]: E0213 19:03:11.832014 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.832168 kubelet[2411]: W0213 19:03:11.832035 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.832168 kubelet[2411]: E0213 19:03:11.832080 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.832612 kubelet[2411]: E0213 19:03:11.832511 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.832612 kubelet[2411]: W0213 19:03:11.832532 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.832612 kubelet[2411]: E0213 19:03:11.832575 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.833137 kubelet[2411]: E0213 19:03:11.833041 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.833137 kubelet[2411]: W0213 19:03:11.833060 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.833137 kubelet[2411]: E0213 19:03:11.833101 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.833552 kubelet[2411]: E0213 19:03:11.833533 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.833727 kubelet[2411]: W0213 19:03:11.833642 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.833727 kubelet[2411]: E0213 19:03:11.833689 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.834309 kubelet[2411]: E0213 19:03:11.834107 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.834309 kubelet[2411]: W0213 19:03:11.834126 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.834309 kubelet[2411]: E0213 19:03:11.834166 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.834690 kubelet[2411]: E0213 19:03:11.834593 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.834690 kubelet[2411]: W0213 19:03:11.834612 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.834690 kubelet[2411]: E0213 19:03:11.834653 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.835436 kubelet[2411]: E0213 19:03:11.835264 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.835436 kubelet[2411]: W0213 19:03:11.835287 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.835436 kubelet[2411]: E0213 19:03:11.835329 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.835815 kubelet[2411]: E0213 19:03:11.835714 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.835815 kubelet[2411]: W0213 19:03:11.835735 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.835815 kubelet[2411]: E0213 19:03:11.835778 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.836518 kubelet[2411]: E0213 19:03:11.836353 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.836518 kubelet[2411]: W0213 19:03:11.836374 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.836518 kubelet[2411]: E0213 19:03:11.836421 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.836896 kubelet[2411]: E0213 19:03:11.836798 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.836896 kubelet[2411]: W0213 19:03:11.836818 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.836896 kubelet[2411]: E0213 19:03:11.836861 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.837500 kubelet[2411]: E0213 19:03:11.837348 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.837500 kubelet[2411]: W0213 19:03:11.837369 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.837500 kubelet[2411]: E0213 19:03:11.837411 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.837861 kubelet[2411]: E0213 19:03:11.837768 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.837861 kubelet[2411]: W0213 19:03:11.837786 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.837861 kubelet[2411]: E0213 19:03:11.837826 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.838420 kubelet[2411]: E0213 19:03:11.838320 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.838420 kubelet[2411]: W0213 19:03:11.838339 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.838420 kubelet[2411]: E0213 19:03:11.838379 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.839043 kubelet[2411]: E0213 19:03:11.838864 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.839043 kubelet[2411]: W0213 19:03:11.838883 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.839043 kubelet[2411]: E0213 19:03:11.838926 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.839587 kubelet[2411]: E0213 19:03:11.839426 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.839587 kubelet[2411]: W0213 19:03:11.839446 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.839825 kubelet[2411]: E0213 19:03:11.839710 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.840049 kubelet[2411]: E0213 19:03:11.839955 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.840049 kubelet[2411]: W0213 19:03:11.839973 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.840049 kubelet[2411]: E0213 19:03:11.840014 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.840629 kubelet[2411]: E0213 19:03:11.840528 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.840629 kubelet[2411]: W0213 19:03:11.840548 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.840629 kubelet[2411]: E0213 19:03:11.840590 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.841204 kubelet[2411]: E0213 19:03:11.841042 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.841204 kubelet[2411]: W0213 19:03:11.841060 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.841204 kubelet[2411]: E0213 19:03:11.841101 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.841705 kubelet[2411]: E0213 19:03:11.841549 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.841705 kubelet[2411]: W0213 19:03:11.841571 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.841705 kubelet[2411]: E0213 19:03:11.841615 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.842152 kubelet[2411]: E0213 19:03:11.842054 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.842152 kubelet[2411]: W0213 19:03:11.842074 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.842152 kubelet[2411]: E0213 19:03:11.842116 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.842633 kubelet[2411]: E0213 19:03:11.842612 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.842788 kubelet[2411]: W0213 19:03:11.842704 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.842788 kubelet[2411]: E0213 19:03:11.842755 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.843354 kubelet[2411]: E0213 19:03:11.843161 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.843354 kubelet[2411]: W0213 19:03:11.843182 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.843354 kubelet[2411]: E0213 19:03:11.843256 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.843786 kubelet[2411]: E0213 19:03:11.843657 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.843786 kubelet[2411]: W0213 19:03:11.843675 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.843786 kubelet[2411]: E0213 19:03:11.843717 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.844364 kubelet[2411]: E0213 19:03:11.844207 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.844364 kubelet[2411]: W0213 19:03:11.844264 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.844364 kubelet[2411]: E0213 19:03:11.844319 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.844962 kubelet[2411]: E0213 19:03:11.844802 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.844962 kubelet[2411]: W0213 19:03:11.844821 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.844962 kubelet[2411]: E0213 19:03:11.844863 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.845562 kubelet[2411]: E0213 19:03:11.845379 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.845562 kubelet[2411]: W0213 19:03:11.845405 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.846119 kubelet[2411]: E0213 19:03:11.845916 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.846119 kubelet[2411]: W0213 19:03:11.845945 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.846298 kubelet[2411]: E0213 19:03:11.846262 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.846593 kubelet[2411]: E0213 19:03:11.846571 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.847139 kubelet[2411]: W0213 19:03:11.846695 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.847139 kubelet[2411]: E0213 19:03:11.847119 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.847364 kubelet[2411]: E0213 19:03:11.847166 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.847970 kubelet[2411]: E0213 19:03:11.847766 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.847970 kubelet[2411]: W0213 19:03:11.847794 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.848181 kubelet[2411]: E0213 19:03:11.848156 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.848546 kubelet[2411]: E0213 19:03:11.848417 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.848546 kubelet[2411]: W0213 19:03:11.848437 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.848546 kubelet[2411]: E0213 19:03:11.848480 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.849154 kubelet[2411]: E0213 19:03:11.849017 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.849154 kubelet[2411]: W0213 19:03:11.849038 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.849154 kubelet[2411]: E0213 19:03:11.849084 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.849594 kubelet[2411]: E0213 19:03:11.849574 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.849689 kubelet[2411]: W0213 19:03:11.849667 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.849875 kubelet[2411]: E0213 19:03:11.849798 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.850373 kubelet[2411]: E0213 19:03:11.850212 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.850373 kubelet[2411]: W0213 19:03:11.850272 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.850373 kubelet[2411]: E0213 19:03:11.850314 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:03:11.850918 kubelet[2411]: E0213 19:03:11.850787 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.850918 kubelet[2411]: W0213 19:03:11.850806 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.850918 kubelet[2411]: E0213 19:03:11.850847 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:03:11.851701 kubelet[2411]: E0213 19:03:11.851506 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:03:11.851701 kubelet[2411]: W0213 19:03:11.851530 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:03:11.851701 kubelet[2411]: E0213 19:03:11.851576 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 19:03:11.852183 kubelet[2411]: E0213 19:03:11.852074 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.852183 kubelet[2411]: W0213 19:03:11.852095 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.852183 kubelet[2411]: E0213 19:03:11.852140 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.852930 kubelet[2411]: E0213 19:03:11.852789 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.852930 kubelet[2411]: W0213 19:03:11.852811 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.852930 kubelet[2411]: E0213 19:03:11.852856 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.853680 kubelet[2411]: E0213 19:03:11.853473 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.853680 kubelet[2411]: W0213 19:03:11.853499 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.854273 kubelet[2411]: E0213 19:03:11.854054 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.854273 kubelet[2411]: W0213 19:03:11.854079 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.854514 kubelet[2411]: E0213 19:03:11.854490 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.854614 kubelet[2411]: W0213 19:03:11.854592 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.854992 kubelet[2411]: E0213 19:03:11.854970 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.855122 kubelet[2411]: W0213 19:03:11.855097 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.855589 kubelet[2411]: E0213 19:03:11.855560 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.855976 kubelet[2411]: W0213 19:03:11.855714 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.855976 kubelet[2411]: E0213 19:03:11.855756 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.855976 kubelet[2411]: E0213 19:03:11.855823 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.855976 kubelet[2411]: E0213 19:03:11.855853 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.855976 kubelet[2411]: E0213 19:03:11.855881 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.855976 kubelet[2411]: E0213 19:03:11.855911 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.866340 kubelet[2411]: E0213 19:03:11.865350 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.866340 kubelet[2411]: W0213 19:03:11.865393 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.866340 kubelet[2411]: E0213 19:03:11.865429 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.872900 kubelet[2411]: E0213 19:03:11.872853 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.874752 kubelet[2411]: W0213 19:03:11.874705 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.874963 kubelet[2411]: E0213 19:03:11.874927 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.884577 kubelet[2411]: E0213 19:03:11.884527 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.884577 kubelet[2411]: W0213 19:03:11.884564 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.884742 kubelet[2411]: E0213 19:03:11.884596 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:11.894696 kubelet[2411]: E0213 19:03:11.894660 2411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:03:11.895599 kubelet[2411]: W0213 19:03:11.895389 2411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:03:11.895599 kubelet[2411]: E0213 19:03:11.895429 2411 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:03:12.014602 containerd[1926]: time="2025-02-13T19:03:12.014486509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gn7vj,Uid:4a33011a-608e-4c49-9353-4360b36df732,Namespace:calico-system,Attempt:0,}"
Feb 13 19:03:12.027671 containerd[1926]: time="2025-02-13T19:03:12.027246194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4svj,Uid:330ee996-d399-411a-8fdf-f83d742359a5,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:12.524475 containerd[1926]: time="2025-02-13T19:03:12.524397675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:12.526469 containerd[1926]: time="2025-02-13T19:03:12.526411588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:12.527987 containerd[1926]: time="2025-02-13T19:03:12.527906592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 19:03:12.529352 containerd[1926]: time="2025-02-13T19:03:12.528970579Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:12.529352 containerd[1926]: time="2025-02-13T19:03:12.529280325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:03:12.532764 containerd[1926]: time="2025-02-13T19:03:12.532687064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:12.537664 containerd[1926]: time="2025-02-13T19:03:12.537359027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.99967ms"
Feb 13 19:03:12.540095 containerd[1926]: time="2025-02-13T19:03:12.540024886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.380008ms"
Feb 13 19:03:12.669906 kubelet[2411]: E0213 19:03:12.669858 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:12.726551 containerd[1926]: time="2025-02-13T19:03:12.725889607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:12.726551 containerd[1926]: time="2025-02-13T19:03:12.726204138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:12.726551 containerd[1926]: time="2025-02-13T19:03:12.726342081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:12.728984 containerd[1926]: time="2025-02-13T19:03:12.728700809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:12.730282 containerd[1926]: time="2025-02-13T19:03:12.728552155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:12.730282 containerd[1926]: time="2025-02-13T19:03:12.728660005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:12.730282 containerd[1926]: time="2025-02-13T19:03:12.728696167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:12.730282 containerd[1926]: time="2025-02-13T19:03:12.728869529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:12.840482 kubelet[2411]: E0213 19:03:12.840194 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:12.874342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630232080.mount: Deactivated successfully.
Feb 13 19:03:12.922569 systemd[1]: Started cri-containerd-01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4.scope - libcontainer container 01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4.
Feb 13 19:03:12.926352 systemd[1]: Started cri-containerd-172325b329d9e819d2c0ead6624eafe41477adb1497207fbdca950ceef41beb8.scope - libcontainer container 172325b329d9e819d2c0ead6624eafe41477adb1497207fbdca950ceef41beb8.
Feb 13 19:03:12.975676 containerd[1926]: time="2025-02-13T19:03:12.975578781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gn7vj,Uid:4a33011a-608e-4c49-9353-4360b36df732,Namespace:calico-system,Attempt:0,} returns sandbox id \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\""
Feb 13 19:03:12.984297 containerd[1926]: time="2025-02-13T19:03:12.984212590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:03:12.997469 containerd[1926]: time="2025-02-13T19:03:12.997409036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4svj,Uid:330ee996-d399-411a-8fdf-f83d742359a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"172325b329d9e819d2c0ead6624eafe41477adb1497207fbdca950ceef41beb8\""
Feb 13 19:03:13.671020 kubelet[2411]: E0213 19:03:13.670963 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:14.267796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211162520.mount: Deactivated successfully.
Feb 13 19:03:14.392266 containerd[1926]: time="2025-02-13T19:03:14.391780242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:14.393175 containerd[1926]: time="2025-02-13T19:03:14.393109549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Feb 13 19:03:14.394171 containerd[1926]: time="2025-02-13T19:03:14.394088583Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:14.398765 containerd[1926]: time="2025-02-13T19:03:14.398696570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:14.400520 containerd[1926]: time="2025-02-13T19:03:14.400205368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.415905341s"
Feb 13 19:03:14.400520 containerd[1926]: time="2025-02-13T19:03:14.400293188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 19:03:14.403328 containerd[1926]: time="2025-02-13T19:03:14.403179904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 19:03:14.405660 containerd[1926]: time="2025-02-13T19:03:14.405591561Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:03:14.427856 containerd[1926]: time="2025-02-13T19:03:14.427633775Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae\""
Feb 13 19:03:14.428666 containerd[1926]: time="2025-02-13T19:03:14.428607376Z" level=info msg="StartContainer for \"d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae\""
Feb 13 19:03:14.481529 systemd[1]: Started cri-containerd-d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae.scope - libcontainer container d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae.
Feb 13 19:03:14.534887 containerd[1926]: time="2025-02-13T19:03:14.534707955Z" level=info msg="StartContainer for \"d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae\" returns successfully"
Feb 13 19:03:14.559434 systemd[1]: cri-containerd-d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae.scope: Deactivated successfully.
Feb 13 19:03:14.631521 containerd[1926]: time="2025-02-13T19:03:14.631392645Z" level=info msg="shim disconnected" id=d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae namespace=k8s.io
Feb 13 19:03:14.632036 containerd[1926]: time="2025-02-13T19:03:14.631495289Z" level=warning msg="cleaning up after shim disconnected" id=d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae namespace=k8s.io
Feb 13 19:03:14.632036 containerd[1926]: time="2025-02-13T19:03:14.631806618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:14.671180 kubelet[2411]: E0213 19:03:14.671109 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:14.821439 kubelet[2411]: E0213 19:03:14.819882 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:15.215924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4c4d14ade045e6c4ea610b01f5d90caddd387273dd46046df1adace8e58d7ae-rootfs.mount: Deactivated successfully.
Feb 13 19:03:15.672362 kubelet[2411]: E0213 19:03:15.672069 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:15.685093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870367069.mount: Deactivated successfully.
Feb 13 19:03:16.173268 containerd[1926]: time="2025-02-13T19:03:16.172974045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:16.174707 containerd[1926]: time="2025-02-13T19:03:16.174422405Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 19:03:16.175618 containerd[1926]: time="2025-02-13T19:03:16.175548953Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:16.179368 containerd[1926]: time="2025-02-13T19:03:16.179190367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:16.181623 containerd[1926]: time="2025-02-13T19:03:16.181441353Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.778175393s"
Feb 13 19:03:16.181623 containerd[1926]: time="2025-02-13T19:03:16.181490637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 19:03:16.184790 containerd[1926]: time="2025-02-13T19:03:16.184384789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:03:16.186281 containerd[1926]: time="2025-02-13T19:03:16.185906600Z" level=info msg="CreateContainer within sandbox \"172325b329d9e819d2c0ead6624eafe41477adb1497207fbdca950ceef41beb8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:03:16.219369 containerd[1926]: time="2025-02-13T19:03:16.219297404Z" level=info msg="CreateContainer within sandbox \"172325b329d9e819d2c0ead6624eafe41477adb1497207fbdca950ceef41beb8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38\""
Feb 13 19:03:16.220634 containerd[1926]: time="2025-02-13T19:03:16.220497236Z" level=info msg="StartContainer for \"003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38\""
Feb 13 19:03:16.270784 systemd[1]: run-containerd-runc-k8s.io-003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38-runc.lS5HlE.mount: Deactivated successfully.
Feb 13 19:03:16.284536 systemd[1]: Started cri-containerd-003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38.scope - libcontainer container 003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38.
Feb 13 19:03:16.341286 containerd[1926]: time="2025-02-13T19:03:16.341185530Z" level=info msg="StartContainer for \"003c3004eaa2a65ab35bb805423868f10f1e5a4110350ed90629449cb5759c38\" returns successfully"
Feb 13 19:03:16.673162 kubelet[2411]: E0213 19:03:16.672977 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:16.819692 kubelet[2411]: E0213 19:03:16.819591 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:17.673889 kubelet[2411]: E0213 19:03:17.673836 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:18.675743 kubelet[2411]: E0213 19:03:18.675684 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:18.818411 kubelet[2411]: E0213 19:03:18.817644 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:19.676180 kubelet[2411]: E0213 19:03:19.676054 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:19.698166 containerd[1926]: time="2025-02-13T19:03:19.696772323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:19.699447 containerd[1926]: time="2025-02-13T19:03:19.699372334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 19:03:19.700770 containerd[1926]: time="2025-02-13T19:03:19.700712712Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:19.704479 containerd[1926]: time="2025-02-13T19:03:19.704427050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:19.706021 containerd[1926]: time="2025-02-13T19:03:19.705981484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.521539412s"
Feb 13 19:03:19.706177 containerd[1926]: time="2025-02-13T19:03:19.706126600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 19:03:19.714740 containerd[1926]: time="2025-02-13T19:03:19.714681452Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:03:19.739605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752182602.mount: Deactivated successfully.
Feb 13 19:03:19.744952 containerd[1926]: time="2025-02-13T19:03:19.744898980Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff\""
Feb 13 19:03:19.747255 containerd[1926]: time="2025-02-13T19:03:19.746041049Z" level=info msg="StartContainer for \"72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff\""
Feb 13 19:03:19.804550 systemd[1]: Started cri-containerd-72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff.scope - libcontainer container 72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff.
Feb 13 19:03:19.864076 containerd[1926]: time="2025-02-13T19:03:19.863993019Z" level=info msg="StartContainer for \"72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff\" returns successfully"
Feb 13 19:03:19.908847 kubelet[2411]: I0213 19:03:19.908749 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4svj" podStartSLOduration=6.725659786 podStartE2EDuration="9.90872637s" podCreationTimestamp="2025-02-13 19:03:10 +0000 UTC" firstStartedPulling="2025-02-13 19:03:13.000057264 +0000 UTC m=+3.567399000" lastFinishedPulling="2025-02-13 19:03:16.18312386 +0000 UTC m=+6.750465584" observedRunningTime="2025-02-13 19:03:16.889238732 +0000 UTC m=+7.456580468" watchObservedRunningTime="2025-02-13 19:03:19.90872637 +0000 UTC m=+10.476068106"
Feb 13 19:03:20.676376 kubelet[2411]: E0213 19:03:20.676324 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:20.817832 kubelet[2411]: E0213 19:03:20.817156 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:21.181929 containerd[1926]: time="2025-02-13T19:03:21.181693468Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:21.186401 systemd[1]: cri-containerd-72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff.scope: Deactivated successfully.
Feb 13 19:03:21.223936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff-rootfs.mount: Deactivated successfully.
Feb 13 19:03:21.283540 kubelet[2411]: I0213 19:03:21.283475 2411 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:03:21.677643 kubelet[2411]: E0213 19:03:21.677476 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:22.411719 containerd[1926]: time="2025-02-13T19:03:22.411593098Z" level=info msg="shim disconnected" id=72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff namespace=k8s.io
Feb 13 19:03:22.411719 containerd[1926]: time="2025-02-13T19:03:22.411669392Z" level=warning msg="cleaning up after shim disconnected" id=72f705800c68dca10e9d666258f8286b5dac45e8f38424c9e45ca45a0bd461ff namespace=k8s.io
Feb 13 19:03:22.411719 containerd[1926]: time="2025-02-13T19:03:22.411692876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:22.677013 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:03:22.678494 kubelet[2411]: E0213 19:03:22.678411 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:22.827678 systemd[1]: Created slice kubepods-besteffort-pod099bb73c_5027_4347_b5a3_d47745bfcefe.slice - libcontainer container kubepods-besteffort-pod099bb73c_5027_4347_b5a3_d47745bfcefe.slice.
Feb 13 19:03:22.832336 containerd[1926]: time="2025-02-13T19:03:22.832160358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:0,}"
Feb 13 19:03:22.898922 containerd[1926]: time="2025-02-13T19:03:22.898583586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:03:22.958361 containerd[1926]: time="2025-02-13T19:03:22.958179098Z" level=error msg="Failed to destroy network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:03:22.961738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d-shm.mount: Deactivated successfully.
Feb 13 19:03:22.963584 containerd[1926]: time="2025-02-13T19:03:22.963025561Z" level=error msg="encountered an error cleaning up failed sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:03:22.963584 containerd[1926]: time="2025-02-13T19:03:22.963132740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:03:22.963772 kubelet[2411]: E0213 19:03:22.963481 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:03:22.963772 kubelet[2411]: E0213 19:03:22.963567 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:22.963772 kubelet[2411]: E0213 19:03:22.963604 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms"
Feb 13 19:03:22.964099 kubelet[2411]: E0213 19:03:22.963694 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe"
Feb 13 19:03:23.678798 kubelet[2411]: E0213 19:03:23.678738 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:23.682144 systemd[1]: Created slice kubepods-besteffort-pod68ad26cf_7232_424c_bc12_40ff2a082a9f.slice - libcontainer container kubepods-besteffort-pod68ad26cf_7232_424c_bc12_40ff2a082a9f.slice.
Feb 13 19:03:23.808085 kubelet[2411]: I0213 19:03:23.807995 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7vg\" (UniqueName: \"kubernetes.io/projected/68ad26cf-7232-424c-bc12-40ff2a082a9f-kube-api-access-xb7vg\") pod \"nginx-deployment-8587fbcb89-rwx5z\" (UID: \"68ad26cf-7232-424c-bc12-40ff2a082a9f\") " pod="default/nginx-deployment-8587fbcb89-rwx5z"
Feb 13 19:03:23.896654 kubelet[2411]: I0213 19:03:23.896611 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d"
Feb 13 19:03:23.897685 containerd[1926]: time="2025-02-13T19:03:23.897617858Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\""
Feb 13 19:03:23.901589 containerd[1926]: time="2025-02-13T19:03:23.897938650Z" level=info msg="Ensure that sandbox 985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d in task-service has been cleanup successfully"
Feb 13 19:03:23.901589 containerd[1926]: time="2025-02-13T19:03:23.898306578Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully"
Feb 13 19:03:23.901589 containerd[1926]: time="2025-02-13T19:03:23.898335519Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully"
Feb 13 19:03:23.901589 containerd[1926]: time="2025-02-13T19:03:23.900996820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:1,}"
Feb 13 19:03:23.900632 systemd[1]: run-netns-cni\x2d9703ff46\x2d5d58\x2d6641\x2d5b23\x2de3012911c65a.mount: Deactivated successfully.
Feb 13 19:03:23.988503 containerd[1926]: time="2025-02-13T19:03:23.988003635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:0,}"
Feb 13 19:03:24.030701 containerd[1926]: time="2025-02-13T19:03:24.030616698Z" level=error msg="Failed to destroy network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:03:24.034051 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021-shm.mount: Deactivated successfully.
Feb 13 19:03:24.041384 containerd[1926]: time="2025-02-13T19:03:24.039830333Z" level=error msg="encountered an error cleaning up failed sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.041384 containerd[1926]: time="2025-02-13T19:03:24.041061265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.042785 kubelet[2411]: E0213 19:03:24.041857 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.042785 kubelet[2411]: E0213 19:03:24.041934 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:24.042785 kubelet[2411]: E0213 19:03:24.041969 2411 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:24.043004 kubelet[2411]: E0213 19:03:24.042033 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:24.130457 containerd[1926]: time="2025-02-13T19:03:24.130375078Z" level=error msg="Failed to destroy network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.131534 containerd[1926]: time="2025-02-13T19:03:24.131335533Z" level=error msg="encountered an error cleaning up failed sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.131736 containerd[1926]: time="2025-02-13T19:03:24.131665776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.132430 kubelet[2411]: E0213 19:03:24.132203 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:24.132430 kubelet[2411]: E0213 19:03:24.132311 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:24.132430 kubelet[2411]: E0213 19:03:24.132344 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:24.132692 kubelet[2411]: E0213 19:03:24.132417 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:24.679146 kubelet[2411]: E0213 19:03:24.679029 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:24.902118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8-shm.mount: Deactivated successfully. Feb 13 19:03:24.909889 kubelet[2411]: I0213 19:03:24.908476 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8" Feb 13 19:03:24.910045 containerd[1926]: time="2025-02-13T19:03:24.909571323Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:24.910045 containerd[1926]: time="2025-02-13T19:03:24.909898747Z" level=info msg="Ensure that sandbox e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8 in task-service has been cleanup successfully" Feb 13 19:03:24.913551 systemd[1]: run-netns-cni\x2d023f1a5a\x2d0b3e\x2d1363\x2deb8b\x2db955d9837f5b.mount: Deactivated successfully. 
Feb 13 19:03:24.918291 containerd[1926]: time="2025-02-13T19:03:24.918194038Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:24.918773 containerd[1926]: time="2025-02-13T19:03:24.918573828Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:24.919388 kubelet[2411]: I0213 19:03:24.919343 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021" Feb 13 19:03:24.920781 containerd[1926]: time="2025-02-13T19:03:24.919548352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:1,}" Feb 13 19:03:24.920925 containerd[1926]: time="2025-02-13T19:03:24.920885383Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:24.923289 containerd[1926]: time="2025-02-13T19:03:24.921364784Z" level=info msg="Ensure that sandbox c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021 in task-service has been cleanup successfully" Feb 13 19:03:24.924450 containerd[1926]: time="2025-02-13T19:03:24.924279673Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:24.924450 containerd[1926]: time="2025-02-13T19:03:24.924440621Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:24.925289 containerd[1926]: time="2025-02-13T19:03:24.925247985Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:24.926883 containerd[1926]: time="2025-02-13T19:03:24.925725886Z" level=info msg="TearDown network for 
sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:24.926568 systemd[1]: run-netns-cni\x2ddbb8a5e3\x2d564d\x2dceb2\x2d4588\x2d6ece5e76cf15.mount: Deactivated successfully. Feb 13 19:03:24.927488 containerd[1926]: time="2025-02-13T19:03:24.927353184Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:24.930289 containerd[1926]: time="2025-02-13T19:03:24.929988745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:2,}" Feb 13 19:03:25.170195 containerd[1926]: time="2025-02-13T19:03:25.170108890Z" level=error msg="Failed to destroy network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.171737 containerd[1926]: time="2025-02-13T19:03:25.171648284Z" level=error msg="encountered an error cleaning up failed sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.171737 containerd[1926]: time="2025-02-13T19:03:25.171759264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.172616 kubelet[2411]: E0213 19:03:25.172364 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.172616 kubelet[2411]: E0213 19:03:25.172570 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:25.173110 kubelet[2411]: E0213 19:03:25.172887 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:25.173110 kubelet[2411]: E0213 19:03:25.172995 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:25.221307 containerd[1926]: time="2025-02-13T19:03:25.220333945Z" level=error msg="Failed to destroy network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.221961 containerd[1926]: time="2025-02-13T19:03:25.221904811Z" level=error msg="encountered an error cleaning up failed sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.222206 containerd[1926]: time="2025-02-13T19:03:25.222168260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.223049 kubelet[2411]: E0213 19:03:25.222990 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:25.223193 kubelet[2411]: E0213 19:03:25.223075 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:25.223193 kubelet[2411]: E0213 19:03:25.223108 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:25.223415 kubelet[2411]: E0213 19:03:25.223174 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:25.679867 kubelet[2411]: E0213 
19:03:25.679724 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:25.903261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038-shm.mount: Deactivated successfully. Feb 13 19:03:25.929070 kubelet[2411]: I0213 19:03:25.929023 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7" Feb 13 19:03:25.930505 containerd[1926]: time="2025-02-13T19:03:25.930378663Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:03:25.932824 containerd[1926]: time="2025-02-13T19:03:25.932079341Z" level=info msg="Ensure that sandbox 4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7 in task-service has been cleanup successfully" Feb 13 19:03:25.936440 containerd[1926]: time="2025-02-13T19:03:25.936385834Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:03:25.937610 systemd[1]: run-netns-cni\x2d3cee0138\x2d7f73\x2df2e3\x2ddaac\x2d052de3dff00f.mount: Deactivated successfully. 
Feb 13 19:03:25.937991 containerd[1926]: time="2025-02-13T19:03:25.937851309Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:03:25.939707 containerd[1926]: time="2025-02-13T19:03:25.939659057Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:25.940586 containerd[1926]: time="2025-02-13T19:03:25.940349515Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:25.940586 containerd[1926]: time="2025-02-13T19:03:25.940383566Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:25.941994 containerd[1926]: time="2025-02-13T19:03:25.941742834Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:25.941994 containerd[1926]: time="2025-02-13T19:03:25.941902415Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:25.941994 containerd[1926]: time="2025-02-13T19:03:25.941926439Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:25.942285 kubelet[2411]: I0213 19:03:25.942002 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038" Feb 13 19:03:25.944350 containerd[1926]: time="2025-02-13T19:03:25.944114840Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:03:25.945033 containerd[1926]: time="2025-02-13T19:03:25.944148495Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:3,}" Feb 13 19:03:25.945033 containerd[1926]: time="2025-02-13T19:03:25.944799685Z" level=info msg="Ensure that sandbox 02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038 in task-service has been cleanup successfully" Feb 13 19:03:25.947393 containerd[1926]: time="2025-02-13T19:03:25.947313568Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:03:25.947914 containerd[1926]: time="2025-02-13T19:03:25.947575625Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:03:25.949667 systemd[1]: run-netns-cni\x2d289a0f6d\x2dffbb\x2dacae\x2d4d7c\x2dd9aa0511c21e.mount: Deactivated successfully. Feb 13 19:03:25.951314 containerd[1926]: time="2025-02-13T19:03:25.951213165Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:25.951762 containerd[1926]: time="2025-02-13T19:03:25.951584703Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:25.951762 containerd[1926]: time="2025-02-13T19:03:25.951631216Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:25.952681 containerd[1926]: time="2025-02-13T19:03:25.952639156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:2,}" Feb 13 19:03:26.182059 containerd[1926]: time="2025-02-13T19:03:26.181910559Z" level=error msg="Failed to destroy network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.184603 containerd[1926]: time="2025-02-13T19:03:26.184365647Z" level=error msg="encountered an error cleaning up failed sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.184603 containerd[1926]: time="2025-02-13T19:03:26.184480885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.184603 containerd[1926]: time="2025-02-13T19:03:26.185108208Z" level=error msg="Failed to destroy network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.185840 kubelet[2411]: E0213 19:03:26.184763 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.185840 kubelet[2411]: E0213 
19:03:26.184838 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:26.185840 kubelet[2411]: E0213 19:03:26.184870 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:26.186007 kubelet[2411]: E0213 19:03:26.184927 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:26.186124 containerd[1926]: time="2025-02-13T19:03:26.185998018Z" level=error msg="encountered an error cleaning up failed sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.186190 containerd[1926]: time="2025-02-13T19:03:26.186144057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.186724 kubelet[2411]: E0213 19:03:26.186671 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:26.186843 kubelet[2411]: E0213 19:03:26.186752 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:26.186843 kubelet[2411]: E0213 19:03:26.186785 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:26.186982 kubelet[2411]: E0213 19:03:26.186863 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:26.680780 kubelet[2411]: E0213 19:03:26.680505 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:26.901355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928-shm.mount: Deactivated successfully. Feb 13 19:03:26.901543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf-shm.mount: Deactivated successfully. 
Feb 13 19:03:26.951025 kubelet[2411]: I0213 19:03:26.950899 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf" Feb 13 19:03:26.954914 containerd[1926]: time="2025-02-13T19:03:26.953800899Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:03:26.954914 containerd[1926]: time="2025-02-13T19:03:26.954110884Z" level=info msg="Ensure that sandbox f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf in task-service has been cleanup successfully" Feb 13 19:03:26.955539 containerd[1926]: time="2025-02-13T19:03:26.955158608Z" level=info msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:03:26.955539 containerd[1926]: time="2025-02-13T19:03:26.955192251Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:03:26.959858 containerd[1926]: time="2025-02-13T19:03:26.957940976Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:03:26.959858 containerd[1926]: time="2025-02-13T19:03:26.958097438Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:03:26.959858 containerd[1926]: time="2025-02-13T19:03:26.958118559Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:03:26.959095 systemd[1]: run-netns-cni\x2d09ef83b1\x2da3ed\x2db7d4\x2d2f14\x2dcd21f962603b.mount: Deactivated successfully. 
Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.959904178Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.960142079Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.960171848Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.961850348Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.962007362Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:26.962681 containerd[1926]: time="2025-02-13T19:03:26.962029551Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:26.964561 kubelet[2411]: I0213 19:03:26.963632 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928" Feb 13 19:03:26.965122 containerd[1926]: time="2025-02-13T19:03:26.964928848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:4,}" Feb 13 19:03:26.966178 containerd[1926]: time="2025-02-13T19:03:26.965605502Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:03:26.966756 containerd[1926]: time="2025-02-13T19:03:26.966711421Z" level=info msg="Ensure that sandbox 
c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928 in task-service has been cleanup successfully" Feb 13 19:03:26.970298 containerd[1926]: time="2025-02-13T19:03:26.969920272Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:03:26.970298 containerd[1926]: time="2025-02-13T19:03:26.969997189Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:03:26.973368 containerd[1926]: time="2025-02-13T19:03:26.971313770Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:03:26.973368 containerd[1926]: time="2025-02-13T19:03:26.971862832Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:03:26.973368 containerd[1926]: time="2025-02-13T19:03:26.971937710Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:03:26.970934 systemd[1]: run-netns-cni\x2dcb1bb9d1\x2dcc40\x2d497d\x2dabdd\x2d51f30ff784bf.mount: Deactivated successfully. 
Feb 13 19:03:26.975253 containerd[1926]: time="2025-02-13T19:03:26.975148396Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:26.976587 containerd[1926]: time="2025-02-13T19:03:26.976393577Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:26.976587 containerd[1926]: time="2025-02-13T19:03:26.976435256Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:26.978368 containerd[1926]: time="2025-02-13T19:03:26.978189967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:3,}" Feb 13 19:03:27.160321 containerd[1926]: time="2025-02-13T19:03:27.159941934Z" level=error msg="Failed to destroy network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.161680 containerd[1926]: time="2025-02-13T19:03:27.161000644Z" level=error msg="encountered an error cleaning up failed sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.161680 containerd[1926]: time="2025-02-13T19:03:27.161107463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.161885 kubelet[2411]: E0213 19:03:27.161827 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.161959 kubelet[2411]: E0213 19:03:27.161903 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:27.161959 kubelet[2411]: E0213 19:03:27.161937 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:27.162085 kubelet[2411]: E0213 19:03:27.161995 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:27.204990 containerd[1926]: time="2025-02-13T19:03:27.204822942Z" level=error msg="Failed to destroy network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.206444 containerd[1926]: time="2025-02-13T19:03:27.206332291Z" level=error msg="encountered an error cleaning up failed sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.206843 containerd[1926]: time="2025-02-13T19:03:27.206801412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.208051 kubelet[2411]: E0213 19:03:27.207992 2411 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:27.208208 kubelet[2411]: E0213 19:03:27.208079 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:27.208208 kubelet[2411]: E0213 19:03:27.208113 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:27.208208 kubelet[2411]: E0213 19:03:27.208173 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:27.680894 kubelet[2411]: E0213 19:03:27.680668 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:27.903999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf-shm.mount: Deactivated successfully. Feb 13 19:03:27.904198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565-shm.mount: Deactivated successfully. Feb 13 19:03:27.976420 kubelet[2411]: I0213 19:03:27.976282 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565" Feb 13 19:03:27.978722 containerd[1926]: time="2025-02-13T19:03:27.978128850Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:03:27.980177 containerd[1926]: time="2025-02-13T19:03:27.979717971Z" level=info msg="Ensure that sandbox b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565 in task-service has been cleanup successfully" Feb 13 19:03:27.980177 containerd[1926]: time="2025-02-13T19:03:27.980103926Z" level=info msg="TearDown network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" successfully" Feb 13 19:03:27.980465 containerd[1926]: time="2025-02-13T19:03:27.980132736Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" returns successfully" Feb 13 19:03:27.983047 containerd[1926]: time="2025-02-13T19:03:27.982740339Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:03:27.983047 containerd[1926]: time="2025-02-13T19:03:27.982943865Z" level=info 
msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:03:27.983047 containerd[1926]: time="2025-02-13T19:03:27.982966138Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:03:27.986625 containerd[1926]: time="2025-02-13T19:03:27.985413946Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:03:27.986625 containerd[1926]: time="2025-02-13T19:03:27.985570060Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:03:27.986625 containerd[1926]: time="2025-02-13T19:03:27.985593976Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:03:27.987876 systemd[1]: run-netns-cni\x2da06d7dab\x2d571d\x2d2218\x2d38b3\x2dbdead60e0f3c.mount: Deactivated successfully. 
Feb 13 19:03:27.990765 kubelet[2411]: I0213 19:03:27.989657 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf" Feb 13 19:03:27.991062 containerd[1926]: time="2025-02-13T19:03:27.987862953Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:27.992273 containerd[1926]: time="2025-02-13T19:03:27.991811557Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:03:27.992273 containerd[1926]: time="2025-02-13T19:03:27.992061284Z" level=info msg="Ensure that sandbox bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf in task-service has been cleanup successfully" Feb 13 19:03:27.992704 containerd[1926]: time="2025-02-13T19:03:27.992624655Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:27.992704 containerd[1926]: time="2025-02-13T19:03:27.992668517Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:27.993542 containerd[1926]: time="2025-02-13T19:03:27.993402669Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:27.993611 containerd[1926]: time="2025-02-13T19:03:27.993569542Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:27.993611 containerd[1926]: time="2025-02-13T19:03:27.993592103Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:27.994790 containerd[1926]: time="2025-02-13T19:03:27.994431023Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:5,}" Feb 13 19:03:27.995559 containerd[1926]: time="2025-02-13T19:03:27.995420372Z" level=info msg="TearDown network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" successfully" Feb 13 19:03:27.995559 containerd[1926]: time="2025-02-13T19:03:27.995495442Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" returns successfully" Feb 13 19:03:27.996944 containerd[1926]: time="2025-02-13T19:03:27.996657097Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:03:27.998343 containerd[1926]: time="2025-02-13T19:03:27.998025229Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:03:27.998343 containerd[1926]: time="2025-02-13T19:03:27.998195448Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:03:28.001437 containerd[1926]: time="2025-02-13T19:03:28.001177720Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:03:28.002077 containerd[1926]: time="2025-02-13T19:03:28.001961132Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:03:28.002077 containerd[1926]: time="2025-02-13T19:03:28.002022853Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:03:28.003605 systemd[1]: run-netns-cni\x2def444ade\x2d99dc\x2d4385\x2d9481\x2d134aa72e78a5.mount: Deactivated successfully. 
Feb 13 19:03:28.004004 containerd[1926]: time="2025-02-13T19:03:28.003612538Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:28.004381 containerd[1926]: time="2025-02-13T19:03:28.004150109Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:28.004381 containerd[1926]: time="2025-02-13T19:03:28.004206469Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:28.006369 containerd[1926]: time="2025-02-13T19:03:28.005636897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:4,}" Feb 13 19:03:28.245789 containerd[1926]: time="2025-02-13T19:03:28.245618091Z" level=error msg="Failed to destroy network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.249737 containerd[1926]: time="2025-02-13T19:03:28.247774109Z" level=error msg="encountered an error cleaning up failed sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.249737 containerd[1926]: time="2025-02-13T19:03:28.247907170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.249737 containerd[1926]: time="2025-02-13T19:03:28.248782588Z" level=error msg="Failed to destroy network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.250041 kubelet[2411]: E0213 19:03:28.249186 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.250041 kubelet[2411]: E0213 19:03:28.249285 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:28.250041 kubelet[2411]: E0213 19:03:28.249327 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:28.250325 kubelet[2411]: E0213 19:03:28.249419 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:28.250433 containerd[1926]: time="2025-02-13T19:03:28.250028537Z" level=error msg="encountered an error cleaning up failed sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.250433 containerd[1926]: time="2025-02-13T19:03:28.250123625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.251786 kubelet[2411]: E0213 19:03:28.251466 2411 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:28.251786 kubelet[2411]: E0213 19:03:28.251565 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:28.251786 kubelet[2411]: E0213 19:03:28.251600 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:28.252062 kubelet[2411]: E0213 19:03:28.251697 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:28.681869 kubelet[2411]: E0213 19:03:28.681489 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:28.905638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a-shm.mount: Deactivated successfully. Feb 13 19:03:28.905807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee-shm.mount: Deactivated successfully. Feb 13 19:03:28.997804 kubelet[2411]: I0213 19:03:28.997550 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a" Feb 13 19:03:29.000107 containerd[1926]: time="2025-02-13T19:03:29.000055554Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" Feb 13 19:03:29.001320 containerd[1926]: time="2025-02-13T19:03:29.000882241Z" level=info msg="Ensure that sandbox 1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a in task-service has been cleanup successfully" Feb 13 19:03:29.004662 containerd[1926]: time="2025-02-13T19:03:29.004305772Z" level=info msg="TearDown network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" successfully" Feb 13 19:03:29.004662 containerd[1926]: time="2025-02-13T19:03:29.004354264Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" returns successfully" Feb 13 19:03:29.006252 containerd[1926]: time="2025-02-13T19:03:29.005701226Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:03:29.006252 containerd[1926]: time="2025-02-13T19:03:29.005862761Z" level=info msg="TearDown network 
for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" successfully" Feb 13 19:03:29.006252 containerd[1926]: time="2025-02-13T19:03:29.005884038Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" returns successfully" Feb 13 19:03:29.007454 containerd[1926]: time="2025-02-13T19:03:29.007411858Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:03:29.007903 systemd[1]: run-netns-cni\x2d803da834\x2da7b7\x2d5fff\x2dc940\x2dc2c666f21f63.mount: Deactivated successfully. Feb 13 19:03:29.009686 containerd[1926]: time="2025-02-13T19:03:29.008352835Z" level=info msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:03:29.009686 containerd[1926]: time="2025-02-13T19:03:29.008996146Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:03:29.012163 containerd[1926]: time="2025-02-13T19:03:29.011363893Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:03:29.012163 containerd[1926]: time="2025-02-13T19:03:29.011569734Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:03:29.012163 containerd[1926]: time="2025-02-13T19:03:29.011592847Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:03:29.012922 containerd[1926]: time="2025-02-13T19:03:29.012880379Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:29.013302 containerd[1926]: time="2025-02-13T19:03:29.013083869Z" level=info msg="TearDown network for sandbox 
\"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:29.013302 containerd[1926]: time="2025-02-13T19:03:29.013108049Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:29.014149 kubelet[2411]: I0213 19:03:29.014075 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee" Feb 13 19:03:29.015433 containerd[1926]: time="2025-02-13T19:03:29.015366855Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:29.015672 containerd[1926]: time="2025-02-13T19:03:29.015600174Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:29.015855 containerd[1926]: time="2025-02-13T19:03:29.015668948Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:29.016441 containerd[1926]: time="2025-02-13T19:03:29.016330069Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" Feb 13 19:03:29.018258 containerd[1926]: time="2025-02-13T19:03:29.016622195Z" level=info msg="Ensure that sandbox f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee in task-service has been cleanup successfully" Feb 13 19:03:29.018258 containerd[1926]: time="2025-02-13T19:03:29.016901211Z" level=info msg="TearDown network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" successfully" Feb 13 19:03:29.018258 containerd[1926]: time="2025-02-13T19:03:29.016952953Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" returns successfully" Feb 13 19:03:29.023750 containerd[1926]: 
time="2025-02-13T19:03:29.023686877Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:03:29.024293 containerd[1926]: time="2025-02-13T19:03:29.023906631Z" level=info msg="TearDown network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" successfully" Feb 13 19:03:29.024401 containerd[1926]: time="2025-02-13T19:03:29.024289311Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" returns successfully" Feb 13 19:03:29.024883 systemd[1]: run-netns-cni\x2df5e2726b\x2d2452\x2d07f7\x2d17b2\x2d4a1336c6e586.mount: Deactivated successfully. Feb 13 19:03:29.025777 containerd[1926]: time="2025-02-13T19:03:29.024151620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:6,}" Feb 13 19:03:29.029152 containerd[1926]: time="2025-02-13T19:03:29.029106977Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:03:29.029902 containerd[1926]: time="2025-02-13T19:03:29.029854575Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:03:29.029902 containerd[1926]: time="2025-02-13T19:03:29.029892848Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:03:29.031297 containerd[1926]: time="2025-02-13T19:03:29.031192062Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:03:29.031915 containerd[1926]: time="2025-02-13T19:03:29.031860620Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:03:29.031997 containerd[1926]: time="2025-02-13T19:03:29.031923132Z" 
level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:03:29.034203 containerd[1926]: time="2025-02-13T19:03:29.034117555Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:29.035310 containerd[1926]: time="2025-02-13T19:03:29.034695690Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:29.035310 containerd[1926]: time="2025-02-13T19:03:29.034729597Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:29.037158 containerd[1926]: time="2025-02-13T19:03:29.037079377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:5,}" Feb 13 19:03:29.244634 containerd[1926]: time="2025-02-13T19:03:29.244381431Z" level=error msg="Failed to destroy network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.249949 containerd[1926]: time="2025-02-13T19:03:29.247427631Z" level=error msg="encountered an error cleaning up failed sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.249949 containerd[1926]: time="2025-02-13T19:03:29.247566318Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.250184 kubelet[2411]: E0213 19:03:29.247831 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.250184 kubelet[2411]: E0213 19:03:29.247900 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:29.250184 kubelet[2411]: E0213 19:03:29.247954 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-rwx5z" Feb 13 19:03:29.250395 kubelet[2411]: E0213 19:03:29.248024 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-rwx5z_default(68ad26cf-7232-424c-bc12-40ff2a082a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-rwx5z" podUID="68ad26cf-7232-424c-bc12-40ff2a082a9f" Feb 13 19:03:29.258807 containerd[1926]: time="2025-02-13T19:03:29.258684912Z" level=error msg="Failed to destroy network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.259691 containerd[1926]: time="2025-02-13T19:03:29.259626920Z" level=error msg="encountered an error cleaning up failed sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.259864 containerd[1926]: time="2025-02-13T19:03:29.259725019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.260060 kubelet[2411]: E0213 19:03:29.260006 2411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:03:29.260337 kubelet[2411]: E0213 19:03:29.260084 2411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:29.260337 kubelet[2411]: E0213 19:03:29.260125 2411 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4mms" Feb 13 19:03:29.260337 kubelet[2411]: E0213 19:03:29.260202 2411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4mms_calico-system(099bb73c-5027-4347-b5a3-d47745bfcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4mms" podUID="099bb73c-5027-4347-b5a3-d47745bfcefe" Feb 13 19:03:29.682517 kubelet[2411]: E0213 19:03:29.682338 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:29.749054 containerd[1926]: time="2025-02-13T19:03:29.748411519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:29.751159 containerd[1926]: time="2025-02-13T19:03:29.751092058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:03:29.753492 containerd[1926]: time="2025-02-13T19:03:29.753418510Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:29.759260 containerd[1926]: time="2025-02-13T19:03:29.757964632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:29.759260 containerd[1926]: time="2025-02-13T19:03:29.759146833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.860501921s" Feb 13 19:03:29.759260 containerd[1926]: time="2025-02-13T19:03:29.759185849Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:03:29.771140 containerd[1926]: time="2025-02-13T19:03:29.771087544Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:03:29.798900 containerd[1926]: time="2025-02-13T19:03:29.798841336Z" level=info msg="CreateContainer within sandbox \"01beb6d7da4688b5a48ea1e2025ced4c7f63ac03c77e4ea10f93182b8b871fd4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1\"" Feb 13 19:03:29.799838 containerd[1926]: time="2025-02-13T19:03:29.799796178Z" level=info msg="StartContainer for \"c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1\"" Feb 13 19:03:29.850724 systemd[1]: Started cri-containerd-c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1.scope - libcontainer container c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1. Feb 13 19:03:29.909667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15-shm.mount: Deactivated successfully. Feb 13 19:03:29.909968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed-shm.mount: Deactivated successfully. Feb 13 19:03:29.910096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095284734.mount: Deactivated successfully. 
Feb 13 19:03:29.922075 containerd[1926]: time="2025-02-13T19:03:29.921898553Z" level=info msg="StartContainer for \"c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1\" returns successfully" Feb 13 19:03:30.030033 kubelet[2411]: I0213 19:03:30.029844 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed" Feb 13 19:03:30.033000 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:03:30.033169 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:03:30.033213 containerd[1926]: time="2025-02-13T19:03:30.032452193Z" level=info msg="StopPodSandbox for \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\"" Feb 13 19:03:30.034723 containerd[1926]: time="2025-02-13T19:03:30.033799731Z" level=info msg="Ensure that sandbox 155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed in task-service has been cleanup successfully" Feb 13 19:03:30.038166 containerd[1926]: time="2025-02-13T19:03:30.037970932Z" level=info msg="TearDown network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" successfully" Feb 13 19:03:30.038166 containerd[1926]: time="2025-02-13T19:03:30.038030494Z" level=info msg="StopPodSandbox for \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" returns successfully" Feb 13 19:03:30.038800 systemd[1]: run-netns-cni\x2dbfbc74cc\x2d9e00\x2d0ce4\x2d76dc\x2d2783ddc58281.mount: Deactivated successfully. 
Feb 13 19:03:30.042310 containerd[1926]: time="2025-02-13T19:03:30.040387735Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" Feb 13 19:03:30.042310 containerd[1926]: time="2025-02-13T19:03:30.040585912Z" level=info msg="TearDown network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" successfully" Feb 13 19:03:30.042310 containerd[1926]: time="2025-02-13T19:03:30.040614505Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" returns successfully" Feb 13 19:03:30.043261 containerd[1926]: time="2025-02-13T19:03:30.043175021Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:03:30.043775 containerd[1926]: time="2025-02-13T19:03:30.043592496Z" level=info msg="TearDown network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" successfully" Feb 13 19:03:30.043775 containerd[1926]: time="2025-02-13T19:03:30.043627986Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" returns successfully" Feb 13 19:03:30.045520 containerd[1926]: time="2025-02-13T19:03:30.044831008Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:03:30.045520 containerd[1926]: time="2025-02-13T19:03:30.045052166Z" level=info msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:03:30.045520 containerd[1926]: time="2025-02-13T19:03:30.045078157Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:03:30.047414 containerd[1926]: time="2025-02-13T19:03:30.047368183Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:03:30.048550 
containerd[1926]: time="2025-02-13T19:03:30.048246827Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:03:30.048550 containerd[1926]: time="2025-02-13T19:03:30.048290258Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:03:30.050535 containerd[1926]: time="2025-02-13T19:03:30.050488602Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:03:30.051688 containerd[1926]: time="2025-02-13T19:03:30.051555001Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:03:30.051688 containerd[1926]: time="2025-02-13T19:03:30.051601573Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:03:30.052416 kubelet[2411]: I0213 19:03:30.052102 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15" Feb 13 19:03:30.053512 containerd[1926]: time="2025-02-13T19:03:30.052731960Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:03:30.053512 containerd[1926]: time="2025-02-13T19:03:30.052970784Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:03:30.053512 containerd[1926]: time="2025-02-13T19:03:30.052996823Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:03:30.053885 containerd[1926]: time="2025-02-13T19:03:30.053781099Z" level=info msg="StopPodSandbox for \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\"" Feb 13 
19:03:30.054175 containerd[1926]: time="2025-02-13T19:03:30.054053363Z" level=info msg="Ensure that sandbox f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15 in task-service has been cleanup successfully" Feb 13 19:03:30.057267 containerd[1926]: time="2025-02-13T19:03:30.054591390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:7,}" Feb 13 19:03:30.057561 containerd[1926]: time="2025-02-13T19:03:30.057518309Z" level=info msg="TearDown network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" successfully" Feb 13 19:03:30.057679 containerd[1926]: time="2025-02-13T19:03:30.057652258Z" level=info msg="StopPodSandbox for \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" returns successfully" Feb 13 19:03:30.058812 containerd[1926]: time="2025-02-13T19:03:30.058753068Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" Feb 13 19:03:30.058952 containerd[1926]: time="2025-02-13T19:03:30.058925110Z" level=info msg="TearDown network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" successfully" Feb 13 19:03:30.059005 containerd[1926]: time="2025-02-13T19:03:30.058949709Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" returns successfully" Feb 13 19:03:30.059813 systemd[1]: run-netns-cni\x2d3e1f88be\x2db8df\x2d51e4\x2dd5b3\x2de6af04a41499.mount: Deactivated successfully. 
Feb 13 19:03:30.063684 containerd[1926]: time="2025-02-13T19:03:30.063013205Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:03:30.064506 containerd[1926]: time="2025-02-13T19:03:30.064451417Z" level=info msg="TearDown network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" successfully" Feb 13 19:03:30.068387 containerd[1926]: time="2025-02-13T19:03:30.066825869Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" returns successfully" Feb 13 19:03:30.072825 containerd[1926]: time="2025-02-13T19:03:30.072730863Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:03:30.075890 containerd[1926]: time="2025-02-13T19:03:30.073019043Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:03:30.075890 containerd[1926]: time="2025-02-13T19:03:30.073080273Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:03:30.076297 containerd[1926]: time="2025-02-13T19:03:30.076179294Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:03:30.076565 containerd[1926]: time="2025-02-13T19:03:30.076466743Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:03:30.076565 containerd[1926]: time="2025-02-13T19:03:30.076541189Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:03:30.078628 containerd[1926]: time="2025-02-13T19:03:30.078485273Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:03:30.080430 
containerd[1926]: time="2025-02-13T19:03:30.080364745Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:03:30.080578 containerd[1926]: time="2025-02-13T19:03:30.080438268Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:03:30.084415 containerd[1926]: time="2025-02-13T19:03:30.084352066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:6,}" Feb 13 19:03:30.497478 (udev-worker)[3390]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:30.502832 systemd-networkd[1831]: cali31db25f458a: Link UP Feb 13 19:03:30.503401 systemd-networkd[1831]: cali31db25f458a: Gained carrier Feb 13 19:03:30.533715 kubelet[2411]: I0213 19:03:30.533372 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gn7vj" podStartSLOduration=3.75589848 podStartE2EDuration="20.533348333s" podCreationTimestamp="2025-02-13 19:03:10 +0000 UTC" firstStartedPulling="2025-02-13 19:03:12.982992776 +0000 UTC m=+3.550334488" lastFinishedPulling="2025-02-13 19:03:29.760442617 +0000 UTC m=+20.327784341" observedRunningTime="2025-02-13 19:03:30.046537838 +0000 UTC m=+20.613879574" watchObservedRunningTime="2025-02-13 19:03:30.533348333 +0000 UTC m=+21.100690057" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.236 [INFO][3407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.265 [INFO][3407] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0 nginx-deployment-8587fbcb89- default 68ad26cf-7232-424c-bc12-40ff2a082a9f 1095 0 2025-02-13 19:03:23 +0000 UTC 
map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.196 nginx-deployment-8587fbcb89-rwx5z eth0 default [] [] [kns.default ksa.default.default] cali31db25f458a [] []}} ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.265 [INFO][3407] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.340 [INFO][3430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" HandleID="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Workload="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.363 [INFO][3430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" HandleID="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Workload="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319930), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.196", "pod":"nginx-deployment-8587fbcb89-rwx5z", "timestamp":"2025-02-13 19:03:30.340381928 +0000 UTC"}, Hostname:"172.31.23.196", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.364 [INFO][3430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.364 [INFO][3430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.364 [INFO][3430] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.196' Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.367 [INFO][3430] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.383 [INFO][3430] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.391 [INFO][3430] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.394 [INFO][3430] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.400 [INFO][3430] ipam/ipam.go 205: Affinity has not been confirmed - attempt to confirm it cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.406 [ERROR][3430] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(172.31.23.196-192-168-81-0-26) Name="172.31.23.196-192-168-81-0-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-192-168-81-0-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1146", Generation:0, 
CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"pending", Node:"172.31.23.196", CIDR:"192.168.81.0/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "172.31.23.196-192-168-81-0-26": the object has been modified; please apply your changes to the latest version and try again Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.406 [WARNING][3430] ipam/ipam.go 209: Error marking affinity as pending as part of confirmation process cidr=192.168.81.0/26 error=update conflict: BlockAffinity(172.31.23.196-192-168-81-0-26) host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.407 [INFO][3430] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.410 [INFO][3430] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.413 [INFO][3430] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.413 [INFO][3430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.415 [INFO][3430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3 Feb 13 19:03:30.537272 containerd[1926]: 2025-02-13 19:03:30.421 [INFO][3430] ipam/ipam.go 1203: Writing block in order to 
claim IPs block=192.168.81.0/26 handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.430 [ERROR][3430] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-81-0-26) Name="192-168-81-0-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-81-0-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.81.0/26", Affinity:(*string)(0x40002ed7d0), Allocations:[]*int{(*int)(0x4000102d18), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 
38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0x4000319930), AttrSecondary:map[string]string{"namespace":"default", "node":"172.31.23.196", "pod":"nginx-deployment-8587fbcb89-rwx5z", "timestamp":"2025-02-13 19:03:30.340381928 +0000 UTC"}}}, SequenceNumber:0x1823d9d9c263f03e, SequenceNumberForAllocation:map[string]uint64{"0":0x1823d9d9c263f03d}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-81-0-26": the object has been modified; please apply your changes to the latest version and try again Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.430 [INFO][3430] ipam/ipam.go 1207: Failed to update block block=192.168.81.0/26 error=update conflict: IPAMBlock(192-168-81-0-26) handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.458 [INFO][3430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.462 [INFO][3430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3 Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.471 [INFO][3430] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.479 [INFO][3430] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.1/26] block=192.168.81.0/26 
handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.479 [INFO][3430] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.1/26] handle="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" host="172.31.23.196" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.479 [INFO][3430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.479 [INFO][3430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.1/26] IPv6=[] ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" HandleID="k8s-pod-network.46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Workload="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.538748 containerd[1926]: 2025-02-13 19:03:30.485 [INFO][3407] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"68ad26cf-7232-424c-bc12-40ff2a082a9f", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-rwx5z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali31db25f458a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:30.539523 containerd[1926]: 2025-02-13 19:03:30.485 [INFO][3407] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.1/32] ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.539523 containerd[1926]: 2025-02-13 19:03:30.485 [INFO][3407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31db25f458a ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.539523 containerd[1926]: 2025-02-13 19:03:30.502 [INFO][3407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.539523 containerd[1926]: 2025-02-13 19:03:30.508 [INFO][3407] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" 
WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"68ad26cf-7232-424c-bc12-40ff2a082a9f", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3", Pod:"nginx-deployment-8587fbcb89-rwx5z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali31db25f458a", MAC:"7e:47:b7:40:96:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:30.539523 containerd[1926]: 2025-02-13 19:03:30.533 [INFO][3407] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3" Namespace="default" Pod="nginx-deployment-8587fbcb89-rwx5z" WorkloadEndpoint="172.31.23.196-k8s-nginx--deployment--8587fbcb89--rwx5z-eth0" Feb 13 19:03:30.572025 containerd[1926]: time="2025-02-13T19:03:30.571656761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:30.572025 containerd[1926]: time="2025-02-13T19:03:30.571760065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:30.572025 containerd[1926]: time="2025-02-13T19:03:30.571795316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:30.572025 containerd[1926]: time="2025-02-13T19:03:30.571951466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:30.584694 systemd-networkd[1831]: cali5617097588d: Link UP Feb 13 19:03:30.585085 systemd-networkd[1831]: cali5617097588d: Gained carrier Feb 13 19:03:30.588497 (udev-worker)[3391]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:30.618541 systemd[1]: Started cri-containerd-46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3.scope - libcontainer container 46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3. 
Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.209 [INFO][3397] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.246 [INFO][3397] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.196-k8s-csi--node--driver--l4mms-eth0 csi-node-driver- calico-system 099bb73c-5027-4347-b5a3-d47745bfcefe 996 0 2025-02-13 19:03:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.23.196 csi-node-driver-l4mms eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5617097588d [] []}} ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.246 [INFO][3397] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.348 [INFO][3426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" HandleID="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Workload="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.369 [INFO][3426] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" HandleID="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Workload="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003036c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.23.196", "pod":"csi-node-driver-l4mms", "timestamp":"2025-02-13 19:03:30.348293818 +0000 UTC"}, Hostname:"172.31.23.196", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.370 [INFO][3426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.479 [INFO][3426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.480 [INFO][3426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.196' Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.486 [INFO][3426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.498 [INFO][3426] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.513 [INFO][3426] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.524 [INFO][3426] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.536 [INFO][3426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.537 [INFO][3426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.542 [INFO][3426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505 Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.556 [INFO][3426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.575 [INFO][3426] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.2/26] block=192.168.81.0/26 handle="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.575 [INFO][3426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.2/26] handle="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" host="172.31.23.196" Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.575 [INFO][3426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:03:30.629363 containerd[1926]: 2025-02-13 19:03:30.575 [INFO][3426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.2/26] IPv6=[] ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" HandleID="k8s-pod-network.a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Workload="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.578 [INFO][3397] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-csi--node--driver--l4mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"099bb73c-5027-4347-b5a3-d47745bfcefe", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"", Pod:"csi-node-driver-l4mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5617097588d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.578 [INFO][3397] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.2/32] ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.579 [INFO][3397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5617097588d ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.584 [INFO][3397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.586 [INFO][3397] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-csi--node--driver--l4mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"099bb73c-5027-4347-b5a3-d47745bfcefe", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, 
time.February, 13, 19, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505", Pod:"csi-node-driver-l4mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5617097588d", MAC:"b2:21:bb:02:47:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:30.631413 containerd[1926]: 2025-02-13 19:03:30.625 [INFO][3397] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505" Namespace="calico-system" Pod="csi-node-driver-l4mms" WorkloadEndpoint="172.31.23.196-k8s-csi--node--driver--l4mms-eth0" Feb 13 19:03:30.670023 kubelet[2411]: E0213 19:03:30.669873 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:30.679961 containerd[1926]: time="2025-02-13T19:03:30.678971421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:30.679961 containerd[1926]: time="2025-02-13T19:03:30.679079211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:30.679961 containerd[1926]: time="2025-02-13T19:03:30.679108441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:30.681440 containerd[1926]: time="2025-02-13T19:03:30.681035877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:30.683285 kubelet[2411]: E0213 19:03:30.682702 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:30.697938 containerd[1926]: time="2025-02-13T19:03:30.697883932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rwx5z,Uid:68ad26cf-7232-424c-bc12-40ff2a082a9f,Namespace:default,Attempt:6,} returns sandbox id \"46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3\"" Feb 13 19:03:30.703119 containerd[1926]: time="2025-02-13T19:03:30.702974342Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:03:30.718546 systemd[1]: Started cri-containerd-a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505.scope - libcontainer container a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505. 
Feb 13 19:03:30.761128 containerd[1926]: time="2025-02-13T19:03:30.760018766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4mms,Uid:099bb73c-5027-4347-b5a3-d47745bfcefe,Namespace:calico-system,Attempt:7,} returns sandbox id \"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505\"" Feb 13 19:03:31.683844 kubelet[2411]: E0213 19:03:31.683724 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:32.106167 systemd-networkd[1831]: cali31db25f458a: Gained IPv6LL Feb 13 19:03:32.163360 kernel: bpftool[3679]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:03:32.233802 systemd-networkd[1831]: cali5617097588d: Gained IPv6LL Feb 13 19:03:32.581488 systemd-networkd[1831]: vxlan.calico: Link UP Feb 13 19:03:32.581507 systemd-networkd[1831]: vxlan.calico: Gained carrier Feb 13 19:03:32.684362 kubelet[2411]: E0213 19:03:32.684315 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:33.687266 kubelet[2411]: E0213 19:03:33.686866 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:34.280785 systemd-networkd[1831]: vxlan.calico: Gained IPv6LL Feb 13 19:03:34.423181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263990913.mount: Deactivated successfully. 
Feb 13 19:03:34.687681 kubelet[2411]: E0213 19:03:34.687624 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:35.688483 kubelet[2411]: E0213 19:03:35.688353 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:35.837904 containerd[1926]: time="2025-02-13T19:03:35.837536912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:35.839477 containerd[1926]: time="2025-02-13T19:03:35.839401295Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:03:35.840134 containerd[1926]: time="2025-02-13T19:03:35.840048743Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:35.845126 containerd[1926]: time="2025-02-13T19:03:35.845073893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:35.847396 containerd[1926]: time="2025-02-13T19:03:35.847171152Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.144107479s" Feb 13 19:03:35.847396 containerd[1926]: time="2025-02-13T19:03:35.847250204Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:03:35.850559 containerd[1926]: 
time="2025-02-13T19:03:35.850374629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:03:35.852029 containerd[1926]: time="2025-02-13T19:03:35.851670041Z" level=info msg="CreateContainer within sandbox \"46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:03:35.875993 containerd[1926]: time="2025-02-13T19:03:35.875916824Z" level=info msg="CreateContainer within sandbox \"46f637442a3ac5d4f14a654b79c38652cf9f7e264a8575192708ccb3d6babde3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"81ce3b8610fb18f8ec23460d7a749f3e5c6b5f167714996f27d5a49f61e48d99\"" Feb 13 19:03:35.877183 containerd[1926]: time="2025-02-13T19:03:35.877053915Z" level=info msg="StartContainer for \"81ce3b8610fb18f8ec23460d7a749f3e5c6b5f167714996f27d5a49f61e48d99\"" Feb 13 19:03:35.930544 systemd[1]: Started cri-containerd-81ce3b8610fb18f8ec23460d7a749f3e5c6b5f167714996f27d5a49f61e48d99.scope - libcontainer container 81ce3b8610fb18f8ec23460d7a749f3e5c6b5f167714996f27d5a49f61e48d99. 
Feb 13 19:03:35.973520 containerd[1926]: time="2025-02-13T19:03:35.973146413Z" level=info msg="StartContainer for \"81ce3b8610fb18f8ec23460d7a749f3e5c6b5f167714996f27d5a49f61e48d99\" returns successfully" Feb 13 19:03:36.105608 kubelet[2411]: I0213 19:03:36.105514 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-rwx5z" podStartSLOduration=7.9575285220000005 podStartE2EDuration="13.105492108s" podCreationTimestamp="2025-02-13 19:03:23 +0000 UTC" firstStartedPulling="2025-02-13 19:03:30.70099123 +0000 UTC m=+21.268332966" lastFinishedPulling="2025-02-13 19:03:35.848954828 +0000 UTC m=+26.416296552" observedRunningTime="2025-02-13 19:03:36.105454914 +0000 UTC m=+26.672796650" watchObservedRunningTime="2025-02-13 19:03:36.105492108 +0000 UTC m=+26.672833832" Feb 13 19:03:36.689360 kubelet[2411]: E0213 19:03:36.689292 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:37.080086 ntpd[1905]: Listen normally on 7 vxlan.calico 192.168.81.0:123 Feb 13 19:03:37.081968 ntpd[1905]: 13 Feb 19:03:37 ntpd[1905]: Listen normally on 7 vxlan.calico 192.168.81.0:123 Feb 13 19:03:37.081968 ntpd[1905]: 13 Feb 19:03:37 ntpd[1905]: Listen normally on 8 cali31db25f458a [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:03:37.081968 ntpd[1905]: 13 Feb 19:03:37 ntpd[1905]: Listen normally on 9 cali5617097588d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:03:37.081968 ntpd[1905]: 13 Feb 19:03:37 ntpd[1905]: Listen normally on 10 vxlan.calico [fe80::64da:dcff:fe0d:5a10%5]:123 Feb 13 19:03:37.080247 ntpd[1905]: Listen normally on 8 cali31db25f458a [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:03:37.080332 ntpd[1905]: Listen normally on 9 cali5617097588d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:03:37.080400 ntpd[1905]: Listen normally on 10 vxlan.calico [fe80::64da:dcff:fe0d:5a10%5]:123 Feb 13 19:03:37.231183 containerd[1926]: 
time="2025-02-13T19:03:37.230873323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:37.232504 containerd[1926]: time="2025-02-13T19:03:37.232387014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:03:37.234426 containerd[1926]: time="2025-02-13T19:03:37.234338306Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:37.238261 containerd[1926]: time="2025-02-13T19:03:37.238079066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:37.239684 containerd[1926]: time="2025-02-13T19:03:37.239448865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.389022806s" Feb 13 19:03:37.239684 containerd[1926]: time="2025-02-13T19:03:37.239517819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:03:37.244188 containerd[1926]: time="2025-02-13T19:03:37.243853865Z" level=info msg="CreateContainer within sandbox \"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:03:37.268008 containerd[1926]: time="2025-02-13T19:03:37.267949176Z" level=info msg="CreateContainer within sandbox 
\"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0fa10983053393c76937b36e0d1c626d3f00a853e6217661ca30048c521601d3\"" Feb 13 19:03:37.269453 containerd[1926]: time="2025-02-13T19:03:37.269279370Z" level=info msg="StartContainer for \"0fa10983053393c76937b36e0d1c626d3f00a853e6217661ca30048c521601d3\"" Feb 13 19:03:37.325553 systemd[1]: Started cri-containerd-0fa10983053393c76937b36e0d1c626d3f00a853e6217661ca30048c521601d3.scope - libcontainer container 0fa10983053393c76937b36e0d1c626d3f00a853e6217661ca30048c521601d3. Feb 13 19:03:37.379840 containerd[1926]: time="2025-02-13T19:03:37.378793646Z" level=info msg="StartContainer for \"0fa10983053393c76937b36e0d1c626d3f00a853e6217661ca30048c521601d3\" returns successfully" Feb 13 19:03:37.383101 containerd[1926]: time="2025-02-13T19:03:37.383036068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:03:37.675298 update_engine[1910]: I20250213 19:03:37.674531 1910 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:03:37.690537 kubelet[2411]: E0213 19:03:37.690426 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:37.779435 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3882) Feb 13 19:03:38.019359 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3881) Feb 13 19:03:38.309319 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3881) Feb 13 19:03:38.691642 kubelet[2411]: E0213 19:03:38.691552 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:38.932864 containerd[1926]: time="2025-02-13T19:03:38.932783667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:38.934332 containerd[1926]: time="2025-02-13T19:03:38.934241622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:03:38.935494 containerd[1926]: time="2025-02-13T19:03:38.935424410Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:38.939057 containerd[1926]: time="2025-02-13T19:03:38.939004079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:38.940880 containerd[1926]: time="2025-02-13T19:03:38.940512925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.557413144s" Feb 13 19:03:38.940880 containerd[1926]: time="2025-02-13T19:03:38.940568697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:03:38.945027 containerd[1926]: time="2025-02-13T19:03:38.944880576Z" level=info msg="CreateContainer within sandbox \"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:03:38.968435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675942504.mount: Deactivated successfully. Feb 13 19:03:38.972866 containerd[1926]: time="2025-02-13T19:03:38.972793768Z" level=info msg="CreateContainer within sandbox \"a7cb3b8d6b5c3875724d93111ed95a8ef6a25a511891c3a9de19b99f2f959505\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a2ba52771cf785d99fd7067653e9b392cddcb37ac26577d3d42ce2aa4f73c744\"" Feb 13 19:03:38.974625 containerd[1926]: time="2025-02-13T19:03:38.973484107Z" level=info msg="StartContainer for \"a2ba52771cf785d99fd7067653e9b392cddcb37ac26577d3d42ce2aa4f73c744\"" Feb 13 19:03:39.027526 systemd[1]: Started cri-containerd-a2ba52771cf785d99fd7067653e9b392cddcb37ac26577d3d42ce2aa4f73c744.scope - libcontainer container a2ba52771cf785d99fd7067653e9b392cddcb37ac26577d3d42ce2aa4f73c744. 
Feb 13 19:03:39.081485 containerd[1926]: time="2025-02-13T19:03:39.081298653Z" level=info msg="StartContainer for \"a2ba52771cf785d99fd7067653e9b392cddcb37ac26577d3d42ce2aa4f73c744\" returns successfully" Feb 13 19:03:39.151618 kubelet[2411]: I0213 19:03:39.151536 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l4mms" podStartSLOduration=20.974618617 podStartE2EDuration="29.151515955s" podCreationTimestamp="2025-02-13 19:03:10 +0000 UTC" firstStartedPulling="2025-02-13 19:03:30.765465565 +0000 UTC m=+21.332807289" lastFinishedPulling="2025-02-13 19:03:38.942362903 +0000 UTC m=+29.509704627" observedRunningTime="2025-02-13 19:03:39.150943193 +0000 UTC m=+29.718284917" watchObservedRunningTime="2025-02-13 19:03:39.151515955 +0000 UTC m=+29.718857691" Feb 13 19:03:39.692151 kubelet[2411]: E0213 19:03:39.692087 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:39.857267 kubelet[2411]: I0213 19:03:39.857206 2411 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:03:39.857267 kubelet[2411]: I0213 19:03:39.857274 2411 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:03:40.692747 kubelet[2411]: E0213 19:03:40.692689 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:41.135014 kubelet[2411]: I0213 19:03:41.134550 2411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:03:41.318560 systemd[1]: run-containerd-runc-k8s.io-c9b012c2e9be026aa94e9698754852c6bd9c86d3952c68186a4334357eddfab1-runc.ktcmI4.mount: Deactivated successfully. 
Feb 13 19:03:41.693445 kubelet[2411]: E0213 19:03:41.693385 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:42.693925 kubelet[2411]: E0213 19:03:42.693864 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:43.694888 kubelet[2411]: E0213 19:03:43.694824 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:44.695756 kubelet[2411]: E0213 19:03:44.695685 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:45.696905 kubelet[2411]: E0213 19:03:45.696842 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:45.958746 systemd[1]: Created slice kubepods-besteffort-podb5cbc154_e669_4173_9daa_baa0d90a1c12.slice - libcontainer container kubepods-besteffort-podb5cbc154_e669_4173_9daa_baa0d90a1c12.slice. 
Feb 13 19:03:46.056868 kubelet[2411]: I0213 19:03:46.056703 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b5cbc154-e669-4173-9daa-baa0d90a1c12-data\") pod \"nfs-server-provisioner-0\" (UID: \"b5cbc154-e669-4173-9daa-baa0d90a1c12\") " pod="default/nfs-server-provisioner-0" Feb 13 19:03:46.056868 kubelet[2411]: I0213 19:03:46.056774 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmlf\" (UniqueName: \"kubernetes.io/projected/b5cbc154-e669-4173-9daa-baa0d90a1c12-kube-api-access-8gmlf\") pod \"nfs-server-provisioner-0\" (UID: \"b5cbc154-e669-4173-9daa-baa0d90a1c12\") " pod="default/nfs-server-provisioner-0" Feb 13 19:03:46.264410 containerd[1926]: time="2025-02-13T19:03:46.264209358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b5cbc154-e669-4173-9daa-baa0d90a1c12,Namespace:default,Attempt:0,}" Feb 13 19:03:46.515488 systemd-networkd[1831]: cali60e51b789ff: Link UP Feb 13 19:03:46.515913 systemd-networkd[1831]: cali60e51b789ff: Gained carrier Feb 13 19:03:46.519488 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.366 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.196-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b5cbc154-e669-4173-9daa-baa0d90a1c12 1239 0 2025-02-13 19:03:45 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.23.196 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.366 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.409 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" 
HandleID="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Workload="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.433 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" HandleID="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Workload="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cbf0), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.196", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:03:46.409703008 +0000 UTC"}, Hostname:"172.31.23.196", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.433 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.434 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.434 [INFO][4257] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.196' Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.438 [INFO][4257] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.447 [INFO][4257] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.462 [INFO][4257] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.465 [INFO][4257] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.471 [INFO][4257] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.471 [INFO][4257] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.473 [INFO][4257] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.484 [INFO][4257] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.501 [INFO][4257] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.3/26] block=192.168.81.0/26 
handle="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.501 [INFO][4257] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.3/26] handle="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" host="172.31.23.196" Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.501 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:03:46.546536 containerd[1926]: 2025-02-13 19:03:46.501 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.3/26] IPv6=[] ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" HandleID="k8s-pod-network.9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Workload="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.547819 containerd[1926]: 2025-02-13 19:03:46.507 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b5cbc154-e669-4173-9daa-baa0d90a1c12", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:46.547819 containerd[1926]: 2025-02-13 19:03:46.507 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.3/32] ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.547819 containerd[1926]: 2025-02-13 19:03:46.507 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.547819 containerd[1926]: 2025-02-13 19:03:46.515 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.548155 containerd[1926]: 2025-02-13 19:03:46.517 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b5cbc154-e669-4173-9daa-baa0d90a1c12", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a2:82:4d:84:a4:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:03:46.548155 containerd[1926]: 2025-02-13 19:03:46.544 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.196-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:03:46.589490 containerd[1926]: time="2025-02-13T19:03:46.589266909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:46.589835 containerd[1926]: time="2025-02-13T19:03:46.589628049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:46.589835 containerd[1926]: time="2025-02-13T19:03:46.589677860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:46.591332 containerd[1926]: time="2025-02-13T19:03:46.591100276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:46.633568 systemd[1]: Started cri-containerd-9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa.scope - libcontainer container 9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa. Feb 13 19:03:46.694121 containerd[1926]: time="2025-02-13T19:03:46.694040760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b5cbc154-e669-4173-9daa-baa0d90a1c12,Namespace:default,Attempt:0,} returns sandbox id \"9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa\"" Feb 13 19:03:46.697050 kubelet[2411]: E0213 19:03:46.697008 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:46.697840 containerd[1926]: time="2025-02-13T19:03:46.697789533Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:03:47.529110 systemd-networkd[1831]: cali60e51b789ff: Gained IPv6LL Feb 13 19:03:47.697811 kubelet[2411]: E0213 19:03:47.697745 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:48.698749 kubelet[2411]: E0213 19:03:48.698659 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:03:49.272416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790330780.mount: Deactivated successfully. Feb 13 19:03:49.699494 kubelet[2411]: E0213 19:03:49.699450 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:50.080113 ntpd[1905]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:03:50.081341 ntpd[1905]: 13 Feb 19:03:50 ntpd[1905]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:03:50.669415 kubelet[2411]: E0213 19:03:50.669371 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:50.701104 kubelet[2411]: E0213 19:03:50.700707 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:51.702047 kubelet[2411]: E0213 19:03:51.701935 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:51.959969 containerd[1926]: time="2025-02-13T19:03:51.959827466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:51.962321 containerd[1926]: time="2025-02-13T19:03:51.962104970Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 19:03:51.962469 containerd[1926]: time="2025-02-13T19:03:51.962442625Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:51.968994 containerd[1926]: time="2025-02-13T19:03:51.968897364Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:51.976680 containerd[1926]: time="2025-02-13T19:03:51.974896499Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.277041695s" Feb 13 19:03:51.976680 containerd[1926]: time="2025-02-13T19:03:51.974982160Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:03:51.983414 containerd[1926]: time="2025-02-13T19:03:51.983364071Z" level=info msg="CreateContainer within sandbox \"9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:03:52.002733 containerd[1926]: time="2025-02-13T19:03:52.002679293Z" level=info msg="CreateContainer within sandbox \"9b7286e3efd5019e5c940c4722e84cb086005f866929502ae3605b929f4093aa\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3f7045f6cd533f39903d57640e27cd7047250f878c4739131de47f59f668003a\"" Feb 13 19:03:52.003933 containerd[1926]: time="2025-02-13T19:03:52.003844870Z" level=info msg="StartContainer for \"3f7045f6cd533f39903d57640e27cd7047250f878c4739131de47f59f668003a\"" Feb 13 19:03:52.065543 systemd[1]: Started cri-containerd-3f7045f6cd533f39903d57640e27cd7047250f878c4739131de47f59f668003a.scope - libcontainer container 3f7045f6cd533f39903d57640e27cd7047250f878c4739131de47f59f668003a. 
Feb 13 19:03:52.106794 containerd[1926]: time="2025-02-13T19:03:52.106722290Z" level=info msg="StartContainer for \"3f7045f6cd533f39903d57640e27cd7047250f878c4739131de47f59f668003a\" returns successfully" Feb 13 19:03:52.191950 kubelet[2411]: I0213 19:03:52.191795 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.909636065 podStartE2EDuration="7.191751186s" podCreationTimestamp="2025-02-13 19:03:45 +0000 UTC" firstStartedPulling="2025-02-13 19:03:46.69671637 +0000 UTC m=+37.264058094" lastFinishedPulling="2025-02-13 19:03:51.978831503 +0000 UTC m=+42.546173215" observedRunningTime="2025-02-13 19:03:52.191487378 +0000 UTC m=+42.758829126" watchObservedRunningTime="2025-02-13 19:03:52.191751186 +0000 UTC m=+42.759092922" Feb 13 19:03:52.702384 kubelet[2411]: E0213 19:03:52.702319 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:53.703013 kubelet[2411]: E0213 19:03:53.702952 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:54.703739 kubelet[2411]: E0213 19:03:54.703677 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:55.704745 kubelet[2411]: E0213 19:03:55.704684 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:56.705368 kubelet[2411]: E0213 19:03:56.705305 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:57.705921 kubelet[2411]: E0213 19:03:57.705859 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:58.706427 kubelet[2411]: E0213 19:03:58.706356 2411 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:59.707253 kubelet[2411]: E0213 19:03:59.707169 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:00.707876 kubelet[2411]: E0213 19:04:00.707815 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:01.708938 kubelet[2411]: E0213 19:04:01.708881 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:02.709276 kubelet[2411]: E0213 19:04:02.709181 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:03.710080 kubelet[2411]: E0213 19:04:03.710018 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:04.710902 kubelet[2411]: E0213 19:04:04.710852 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:05.711658 kubelet[2411]: E0213 19:04:05.711555 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:06.712801 kubelet[2411]: E0213 19:04:06.712731 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:07.713666 kubelet[2411]: E0213 19:04:07.713606 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:08.714586 kubelet[2411]: E0213 19:04:08.714520 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:09.715607 kubelet[2411]: E0213 19:04:09.715540 2411 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:10.668542 kubelet[2411]: E0213 19:04:10.668469 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:10.701115 containerd[1926]: time="2025-02-13T19:04:10.700837241Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:04:10.701115 containerd[1926]: time="2025-02-13T19:04:10.701003789Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:04:10.701115 containerd[1926]: time="2025-02-13T19:04:10.701024779Z" level=info msg="StopPodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:04:10.702838 containerd[1926]: time="2025-02-13T19:04:10.702772425Z" level=info msg="RemovePodSandbox for \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:04:10.702968 containerd[1926]: time="2025-02-13T19:04:10.702850973Z" level=info msg="Forcibly stopping sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\"" Feb 13 19:04:10.703078 containerd[1926]: time="2025-02-13T19:04:10.703045204Z" level=info msg="TearDown network for sandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" successfully" Feb 13 19:04:10.708127 containerd[1926]: time="2025-02-13T19:04:10.708018001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.708301 containerd[1926]: time="2025-02-13T19:04:10.708151410Z" level=info msg="RemovePodSandbox \"e9602ebb626fca885cf337192e46fd08c84815ae6eefb3d5eb2e23d9bfcdcda8\" returns successfully" Feb 13 19:04:10.709137 containerd[1926]: time="2025-02-13T19:04:10.709088765Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:04:10.709344 containerd[1926]: time="2025-02-13T19:04:10.709294306Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:04:10.709344 containerd[1926]: time="2025-02-13T19:04:10.709327098Z" level=info msg="StopPodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:04:10.709917 containerd[1926]: time="2025-02-13T19:04:10.709799745Z" level=info msg="RemovePodSandbox for \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:04:10.709917 containerd[1926]: time="2025-02-13T19:04:10.709850300Z" level=info msg="Forcibly stopping sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\"" Feb 13 19:04:10.710078 containerd[1926]: time="2025-02-13T19:04:10.709968129Z" level=info msg="TearDown network for sandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" successfully" Feb 13 19:04:10.713203 containerd[1926]: time="2025-02-13T19:04:10.713122959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.713203 containerd[1926]: time="2025-02-13T19:04:10.713204674Z" level=info msg="RemovePodSandbox \"02c7f8ed37971be2189c72b7b8664f3b3bf0395d0741be45a880db300b6a9038\" returns successfully" Feb 13 19:04:10.714160 containerd[1926]: time="2025-02-13T19:04:10.714116302Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:04:10.714366 containerd[1926]: time="2025-02-13T19:04:10.714311372Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:04:10.714366 containerd[1926]: time="2025-02-13T19:04:10.714343972Z" level=info msg="StopPodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:04:10.715259 containerd[1926]: time="2025-02-13T19:04:10.714919912Z" level=info msg="RemovePodSandbox for \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:04:10.715259 containerd[1926]: time="2025-02-13T19:04:10.714964110Z" level=info msg="Forcibly stopping sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\"" Feb 13 19:04:10.715259 containerd[1926]: time="2025-02-13T19:04:10.715084565Z" level=info msg="TearDown network for sandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" successfully" Feb 13 19:04:10.716664 kubelet[2411]: E0213 19:04:10.716619 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:10.719430 containerd[1926]: time="2025-02-13T19:04:10.719347377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.719430 containerd[1926]: time="2025-02-13T19:04:10.719419737Z" level=info msg="RemovePodSandbox \"c137bcc8d9e5af2da5cb410ac6dcd545139a915d0133755679a3517750c24928\" returns successfully" Feb 13 19:04:10.720351 containerd[1926]: time="2025-02-13T19:04:10.720269188Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:04:10.720477 containerd[1926]: time="2025-02-13T19:04:10.720429152Z" level=info msg="TearDown network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" successfully" Feb 13 19:04:10.720477 containerd[1926]: time="2025-02-13T19:04:10.720454111Z" level=info msg="StopPodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" returns successfully" Feb 13 19:04:10.720966 containerd[1926]: time="2025-02-13T19:04:10.720909452Z" level=info msg="RemovePodSandbox for \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:04:10.720966 containerd[1926]: time="2025-02-13T19:04:10.720958447Z" level=info msg="Forcibly stopping sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\"" Feb 13 19:04:10.721102 containerd[1926]: time="2025-02-13T19:04:10.721083005Z" level=info msg="TearDown network for sandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" successfully" Feb 13 19:04:10.724390 containerd[1926]: time="2025-02-13T19:04:10.724314656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.725329 containerd[1926]: time="2025-02-13T19:04:10.724391490Z" level=info msg="RemovePodSandbox \"bb28753c0804a02596aefe97183bd87fb58dcf867c4eea7c953892d5381cbebf\" returns successfully" Feb 13 19:04:10.725329 containerd[1926]: time="2025-02-13T19:04:10.724964011Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" Feb 13 19:04:10.725329 containerd[1926]: time="2025-02-13T19:04:10.725122836Z" level=info msg="TearDown network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" successfully" Feb 13 19:04:10.725329 containerd[1926]: time="2025-02-13T19:04:10.725143753Z" level=info msg="StopPodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" returns successfully" Feb 13 19:04:10.725726 containerd[1926]: time="2025-02-13T19:04:10.725583357Z" level=info msg="RemovePodSandbox for \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" Feb 13 19:04:10.725726 containerd[1926]: time="2025-02-13T19:04:10.725632677Z" level=info msg="Forcibly stopping sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\"" Feb 13 19:04:10.725935 containerd[1926]: time="2025-02-13T19:04:10.725751178Z" level=info msg="TearDown network for sandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" successfully" Feb 13 19:04:10.731644 containerd[1926]: time="2025-02-13T19:04:10.730783212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.731644 containerd[1926]: time="2025-02-13T19:04:10.730904712Z" level=info msg="RemovePodSandbox \"f97b579e07e74650d68d3c3bf22582b49e75f9e7bec4d513c33374b486e860ee\" returns successfully" Feb 13 19:04:10.735415 containerd[1926]: time="2025-02-13T19:04:10.735347697Z" level=info msg="StopPodSandbox for \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\"" Feb 13 19:04:10.735569 containerd[1926]: time="2025-02-13T19:04:10.735542408Z" level=info msg="TearDown network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" successfully" Feb 13 19:04:10.735635 containerd[1926]: time="2025-02-13T19:04:10.735566539Z" level=info msg="StopPodSandbox for \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" returns successfully" Feb 13 19:04:10.737040 containerd[1926]: time="2025-02-13T19:04:10.736444524Z" level=info msg="RemovePodSandbox for \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\"" Feb 13 19:04:10.737040 containerd[1926]: time="2025-02-13T19:04:10.736494923Z" level=info msg="Forcibly stopping sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\"" Feb 13 19:04:10.737040 containerd[1926]: time="2025-02-13T19:04:10.736621184Z" level=info msg="TearDown network for sandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" successfully" Feb 13 19:04:10.740866 containerd[1926]: time="2025-02-13T19:04:10.740816757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.741208 containerd[1926]: time="2025-02-13T19:04:10.741073812Z" level=info msg="RemovePodSandbox \"f1f61dec3ad0112d3d5bded7dfe9ea7fd4905061fd2b4b368e9a505fb09aaf15\" returns successfully" Feb 13 19:04:10.742329 containerd[1926]: time="2025-02-13T19:04:10.741841716Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:04:10.742329 containerd[1926]: time="2025-02-13T19:04:10.742007641Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:04:10.742329 containerd[1926]: time="2025-02-13T19:04:10.742028103Z" level=info msg="StopPodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:04:10.743169 containerd[1926]: time="2025-02-13T19:04:10.742854694Z" level=info msg="RemovePodSandbox for \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:04:10.743169 containerd[1926]: time="2025-02-13T19:04:10.742917146Z" level=info msg="Forcibly stopping sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\"" Feb 13 19:04:10.743169 containerd[1926]: time="2025-02-13T19:04:10.743043947Z" level=info msg="TearDown network for sandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" successfully" Feb 13 19:04:10.746207 containerd[1926]: time="2025-02-13T19:04:10.746134597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.746366 containerd[1926]: time="2025-02-13T19:04:10.746269385Z" level=info msg="RemovePodSandbox \"985dff1f3b56948cc00ce7dc072b1a94a24715666ff1b7325e449307562e025d\" returns successfully" Feb 13 19:04:10.747428 containerd[1926]: time="2025-02-13T19:04:10.746925481Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:04:10.747428 containerd[1926]: time="2025-02-13T19:04:10.747084198Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:04:10.747428 containerd[1926]: time="2025-02-13T19:04:10.747105307Z" level=info msg="StopPodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:04:10.747732 containerd[1926]: time="2025-02-13T19:04:10.747684761Z" level=info msg="RemovePodSandbox for \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:04:10.747795 containerd[1926]: time="2025-02-13T19:04:10.747736000Z" level=info msg="Forcibly stopping sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\"" Feb 13 19:04:10.747912 containerd[1926]: time="2025-02-13T19:04:10.747858686Z" level=info msg="TearDown network for sandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" successfully" Feb 13 19:04:10.751105 containerd[1926]: time="2025-02-13T19:04:10.751013720Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.751105 containerd[1926]: time="2025-02-13T19:04:10.751100665Z" level=info msg="RemovePodSandbox \"c263cfff6a934b75bc7ed638ebbfa79623d532cf3de099f46d1ac8a3f5b08021\" returns successfully" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.751826973Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.752000298Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.752022403Z" level=info msg="StopPodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.752885144Z" level=info msg="RemovePodSandbox for \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.752952178Z" level=info msg="Forcibly stopping sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\"" Feb 13 19:04:10.753455 containerd[1926]: time="2025-02-13T19:04:10.753135339Z" level=info msg="TearDown network for sandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" successfully" Feb 13 19:04:10.756554 containerd[1926]: time="2025-02-13T19:04:10.756457521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.756684 containerd[1926]: time="2025-02-13T19:04:10.756604987Z" level=info msg="RemovePodSandbox \"4d09d53e41ff4fbb806f8d61addaaf05a4393d2a3acc614bc4fd060a63b8eaa7\" returns successfully" Feb 13 19:04:10.757329 containerd[1926]: time="2025-02-13T19:04:10.757207626Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:04:10.757556 containerd[1926]: time="2025-02-13T19:04:10.757445131Z" level=info msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:04:10.757652 containerd[1926]: time="2025-02-13T19:04:10.757610564Z" level=info msg="StopPodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:04:10.758134 containerd[1926]: time="2025-02-13T19:04:10.758094306Z" level=info msg="RemovePodSandbox for \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:04:10.758134 containerd[1926]: time="2025-02-13T19:04:10.758141251Z" level=info msg="Forcibly stopping sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\"" Feb 13 19:04:10.758350 containerd[1926]: time="2025-02-13T19:04:10.758307044Z" level=info msg="TearDown network for sandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" successfully" Feb 13 19:04:10.761833 containerd[1926]: time="2025-02-13T19:04:10.761738899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.761833 containerd[1926]: time="2025-02-13T19:04:10.761829861Z" level=info msg="RemovePodSandbox \"f78d1195e26a648189741af2ccd6e384bb5833ae0403534c17218cf3c2e20aaf\" returns successfully" Feb 13 19:04:10.762697 containerd[1926]: time="2025-02-13T19:04:10.762654569Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:04:10.762857 containerd[1926]: time="2025-02-13T19:04:10.762810599Z" level=info msg="TearDown network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" successfully" Feb 13 19:04:10.762857 containerd[1926]: time="2025-02-13T19:04:10.762846689Z" level=info msg="StopPodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" returns successfully" Feb 13 19:04:10.764999 containerd[1926]: time="2025-02-13T19:04:10.763618982Z" level=info msg="RemovePodSandbox for \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:04:10.764999 containerd[1926]: time="2025-02-13T19:04:10.763662329Z" level=info msg="Forcibly stopping sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\"" Feb 13 19:04:10.764999 containerd[1926]: time="2025-02-13T19:04:10.763780242Z" level=info msg="TearDown network for sandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" successfully" Feb 13 19:04:10.766930 containerd[1926]: time="2025-02-13T19:04:10.766886916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.767101 containerd[1926]: time="2025-02-13T19:04:10.767072331Z" level=info msg="RemovePodSandbox \"b883a98edfc20a59c6e696f7adb8662fbbdfec3386ba86423abd7e271ab21565\" returns successfully" Feb 13 19:04:10.767885 containerd[1926]: time="2025-02-13T19:04:10.767795869Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" Feb 13 19:04:10.768172 containerd[1926]: time="2025-02-13T19:04:10.768142987Z" level=info msg="TearDown network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" successfully" Feb 13 19:04:10.768317 containerd[1926]: time="2025-02-13T19:04:10.768291197Z" level=info msg="StopPodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" returns successfully" Feb 13 19:04:10.769021 containerd[1926]: time="2025-02-13T19:04:10.768918459Z" level=info msg="RemovePodSandbox for \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" Feb 13 19:04:10.769113 containerd[1926]: time="2025-02-13T19:04:10.769025434Z" level=info msg="Forcibly stopping sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\"" Feb 13 19:04:10.769178 containerd[1926]: time="2025-02-13T19:04:10.769148120Z" level=info msg="TearDown network for sandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" successfully" Feb 13 19:04:10.772315 containerd[1926]: time="2025-02-13T19:04:10.772255430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.772735 containerd[1926]: time="2025-02-13T19:04:10.772331448Z" level=info msg="RemovePodSandbox \"1fc169c4d7a476ee456d6b59878c818429b6555519f10ce71230e193a89df27a\" returns successfully" Feb 13 19:04:10.773031 containerd[1926]: time="2025-02-13T19:04:10.772994920Z" level=info msg="StopPodSandbox for \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\"" Feb 13 19:04:10.773395 containerd[1926]: time="2025-02-13T19:04:10.773270014Z" level=info msg="TearDown network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" successfully" Feb 13 19:04:10.773395 containerd[1926]: time="2025-02-13T19:04:10.773297097Z" level=info msg="StopPodSandbox for \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" returns successfully" Feb 13 19:04:10.773778 containerd[1926]: time="2025-02-13T19:04:10.773722476Z" level=info msg="RemovePodSandbox for \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\"" Feb 13 19:04:10.773879 containerd[1926]: time="2025-02-13T19:04:10.773773750Z" level=info msg="Forcibly stopping sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\"" Feb 13 19:04:10.773951 containerd[1926]: time="2025-02-13T19:04:10.773895801Z" level=info msg="TearDown network for sandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" successfully" Feb 13 19:04:10.776912 containerd[1926]: time="2025-02-13T19:04:10.776844046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:10.777157 containerd[1926]: time="2025-02-13T19:04:10.776922403Z" level=info msg="RemovePodSandbox \"155051b6da0b3d286bd9c89da13c539a1524f299c54024802a8e777c6041efed\" returns successfully" Feb 13 19:04:11.717830 kubelet[2411]: E0213 19:04:11.717771 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:12.718376 kubelet[2411]: E0213 19:04:12.718314 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:13.719526 kubelet[2411]: E0213 19:04:13.719467 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:14.720454 kubelet[2411]: E0213 19:04:14.720395 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:15.720996 kubelet[2411]: E0213 19:04:15.720950 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:16.721484 kubelet[2411]: E0213 19:04:16.721419 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:17.159062 systemd[1]: Created slice kubepods-besteffort-pod1ab2b719_5761_4036_a51f_7132886bb0e8.slice - libcontainer container kubepods-besteffort-pod1ab2b719_5761_4036_a51f_7132886bb0e8.slice. 
Feb 13 19:04:17.249918 kubelet[2411]: I0213 19:04:17.249859 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-debe5ae9-54fa-439f-8044-e098f025e825\" (UniqueName: \"kubernetes.io/nfs/1ab2b719-5761-4036-a51f-7132886bb0e8-pvc-debe5ae9-54fa-439f-8044-e098f025e825\") pod \"test-pod-1\" (UID: \"1ab2b719-5761-4036-a51f-7132886bb0e8\") " pod="default/test-pod-1" Feb 13 19:04:17.250100 kubelet[2411]: I0213 19:04:17.249927 2411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthlc\" (UniqueName: \"kubernetes.io/projected/1ab2b719-5761-4036-a51f-7132886bb0e8-kube-api-access-vthlc\") pod \"test-pod-1\" (UID: \"1ab2b719-5761-4036-a51f-7132886bb0e8\") " pod="default/test-pod-1" Feb 13 19:04:17.392525 kernel: FS-Cache: Loaded Feb 13 19:04:17.434654 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:04:17.434754 kernel: RPC: Registered udp transport module. Feb 13 19:04:17.434816 kernel: RPC: Registered tcp transport module. Feb 13 19:04:17.436577 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:04:17.436660 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:04:17.722411 kubelet[2411]: E0213 19:04:17.722072 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:17.771381 kernel: NFS: Registering the id_resolver key type Feb 13 19:04:17.771495 kernel: Key type id_resolver registered Feb 13 19:04:17.771528 kernel: Key type id_legacy registered Feb 13 19:04:17.807771 nfsidmap[4485]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:04:17.813556 nfsidmap[4487]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:04:18.064572 containerd[1926]: time="2025-02-13T19:04:18.064286348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1ab2b719-5761-4036-a51f-7132886bb0e8,Namespace:default,Attempt:0,}" Feb 13 19:04:18.253339 (udev-worker)[4482]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:04:18.255911 systemd-networkd[1831]: cali5ec59c6bf6e: Link UP Feb 13 19:04:18.258419 systemd-networkd[1831]: cali5ec59c6bf6e: Gained carrier Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.139 [INFO][4488] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.196-k8s-test--pod--1-eth0 default 1ab2b719-5761-4036-a51f-7132886bb0e8 1332 0 2025-02-13 19:03:46 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.196 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.140 [INFO][4488] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.187 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" HandleID="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Workload="172.31.23.196-k8s-test--pod--1-eth0" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.204 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" HandleID="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Workload="172.31.23.196-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000335f20), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.23.196", "pod":"test-pod-1", "timestamp":"2025-02-13 19:04:18.187660111 +0000 UTC"}, Hostname:"172.31.23.196", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.204 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.204 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.205 [INFO][4499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.196' Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.207 [INFO][4499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.213 [INFO][4499] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.220 [INFO][4499] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.223 [INFO][4499] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.227 [INFO][4499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.227 [INFO][4499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.229 
[INFO][4499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.238 [INFO][4499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.247 [INFO][4499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.4/26] block=192.168.81.0/26 handle="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.247 [INFO][4499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.4/26] handle="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" host="172.31.23.196" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.247 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.248 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.4/26] IPv6=[] ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" HandleID="k8s-pod-network.fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Workload="172.31.23.196-k8s-test--pod--1-eth0" Feb 13 19:04:18.278108 containerd[1926]: 2025-02-13 19:04:18.250 [INFO][4488] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1ab2b719-5761-4036-a51f-7132886bb0e8", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:04:18.282959 containerd[1926]: 2025-02-13 19:04:18.250 [INFO][4488] cni-plugin/k8s.go 387: Calico CNI using IPs: 
[192.168.81.4/32] ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0"
Feb 13 19:04:18.282959 containerd[1926]: 2025-02-13 19:04:18.250 [INFO][4488] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0"
Feb 13 19:04:18.282959 containerd[1926]: 2025-02-13 19:04:18.259 [INFO][4488] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0"
Feb 13 19:04:18.282959 containerd[1926]: 2025-02-13 19:04:18.262 [INFO][4488] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.196-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1ab2b719-5761-4036-a51f-7132886bb0e8", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 3, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.196", ContainerID:"fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8a:da:d7:31:f4:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:04:18.282959 containerd[1926]: 2025-02-13 19:04:18.273 [INFO][4488] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.196-k8s-test--pod--1-eth0"
Feb 13 19:04:18.315464 containerd[1926]: time="2025-02-13T19:04:18.315060067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:18.315464 containerd[1926]: time="2025-02-13T19:04:18.315166502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:18.315464 containerd[1926]: time="2025-02-13T19:04:18.315195276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:18.317095 containerd[1926]: time="2025-02-13T19:04:18.316616841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:18.348541 systemd[1]: Started cri-containerd-fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac.scope - libcontainer container fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac.
Feb 13 19:04:18.417413 containerd[1926]: time="2025-02-13T19:04:18.417360252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1ab2b719-5761-4036-a51f-7132886bb0e8,Namespace:default,Attempt:0,} returns sandbox id \"fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac\""
Feb 13 19:04:18.420648 containerd[1926]: time="2025-02-13T19:04:18.420596389Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:04:18.723236 kubelet[2411]: E0213 19:04:18.723147 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:18.793155 containerd[1926]: time="2025-02-13T19:04:18.792941197Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:18.794060 containerd[1926]: time="2025-02-13T19:04:18.793989436Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:04:18.800357 containerd[1926]: time="2025-02-13T19:04:18.800182358Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 379.509531ms"
Feb 13 19:04:18.800357 containerd[1926]: time="2025-02-13T19:04:18.800256589Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:04:18.803438 containerd[1926]: time="2025-02-13T19:04:18.803380930Z" level=info msg="CreateContainer within sandbox \"fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:04:18.825944 containerd[1926]: time="2025-02-13T19:04:18.825869716Z" level=info msg="CreateContainer within sandbox \"fd8175467504e83a932c8210894d62ac252f3ff504d24c384b6240e33b34ffac\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9\""
Feb 13 19:04:18.827270 containerd[1926]: time="2025-02-13T19:04:18.827085620Z" level=info msg="StartContainer for \"ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9\""
Feb 13 19:04:18.879665 systemd[1]: Started cri-containerd-ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9.scope - libcontainer container ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9.
Feb 13 19:04:18.928107 containerd[1926]: time="2025-02-13T19:04:18.928047250Z" level=info msg="StartContainer for \"ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9\" returns successfully"
Feb 13 19:04:19.267505 kubelet[2411]: I0213 19:04:19.267419 2411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.885905227 podStartE2EDuration="33.267379076s" podCreationTimestamp="2025-02-13 19:03:46 +0000 UTC" firstStartedPulling="2025-02-13 19:04:18.419736815 +0000 UTC m=+68.987078539" lastFinishedPulling="2025-02-13 19:04:18.801210676 +0000 UTC m=+69.368552388" observedRunningTime="2025-02-13 19:04:19.266495407 +0000 UTC m=+69.833837143" watchObservedRunningTime="2025-02-13 19:04:19.267379076 +0000 UTC m=+69.834720788"
Feb 13 19:04:19.371067 systemd[1]: run-containerd-runc-k8s.io-ca64fde859a5a8dc839bf24ba54a85c7b23459e3ebd936ec482f1f9f3b9796a9-runc.9MPYTG.mount: Deactivated successfully.
Feb 13 19:04:19.723529 kubelet[2411]: E0213 19:04:19.723459 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:19.912644 systemd-networkd[1831]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:04:20.723849 kubelet[2411]: E0213 19:04:20.723782 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:21.725008 kubelet[2411]: E0213 19:04:21.724943 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:22.080139 ntpd[1905]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:04:22.080863 ntpd[1905]: 13 Feb 19:04:22 ntpd[1905]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:04:22.725749 kubelet[2411]: E0213 19:04:22.725693 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:23.726755 kubelet[2411]: E0213 19:04:23.726695 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:24.727349 kubelet[2411]: E0213 19:04:24.727274 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:25.728128 kubelet[2411]: E0213 19:04:25.728066 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:26.728451 kubelet[2411]: E0213 19:04:26.728390 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:27.729567 kubelet[2411]: E0213 19:04:27.729496 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:28.730080 kubelet[2411]: E0213 19:04:28.730014 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:29.730591 kubelet[2411]: E0213 19:04:29.730519 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:30.668617 kubelet[2411]: E0213 19:04:30.668548 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:30.731525 kubelet[2411]: E0213 19:04:30.731422 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:31.731618 kubelet[2411]: E0213 19:04:31.731547 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:32.732347 kubelet[2411]: E0213 19:04:32.732288 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:33.733375 kubelet[2411]: E0213 19:04:33.733302 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:34.734388 kubelet[2411]: E0213 19:04:34.734329 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:35.735070 kubelet[2411]: E0213 19:04:35.735012 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:36.735941 kubelet[2411]: E0213 19:04:36.735871 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:37.736670 kubelet[2411]: E0213 19:04:37.736611 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:38.736796 kubelet[2411]: E0213 19:04:38.736734 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:39.737770 kubelet[2411]: E0213 19:04:39.737713 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:40.738103 kubelet[2411]: E0213 19:04:40.738045 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:41.738455 kubelet[2411]: E0213 19:04:41.738398 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:42.739483 kubelet[2411]: E0213 19:04:42.739410 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:42.770159 kubelet[2411]: E0213 19:04:42.770122 2411 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.23.196)"
Feb 13 19:04:43.740084 kubelet[2411]: E0213 19:04:43.740025 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:44.740257 kubelet[2411]: E0213 19:04:44.740167 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:45.741134 kubelet[2411]: E0213 19:04:45.741073 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:46.741913 kubelet[2411]: E0213 19:04:46.741846 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:47.742461 kubelet[2411]: E0213 19:04:47.742399 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:48.742827 kubelet[2411]: E0213 19:04:48.742769 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:49.743458 kubelet[2411]: E0213 19:04:49.743396 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:50.668976 kubelet[2411]: E0213 19:04:50.668916 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:50.743747 kubelet[2411]: E0213 19:04:50.743685 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:51.744893 kubelet[2411]: E0213 19:04:51.744834 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:52.745271 kubelet[2411]: E0213 19:04:52.745198 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:52.766963 kubelet[2411]: E0213 19:04:52.766906 2411 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.23.196)"
Feb 13 19:04:53.746239 kubelet[2411]: E0213 19:04:53.746180 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:54.746660 kubelet[2411]: E0213 19:04:54.746601 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:55.747327 kubelet[2411]: E0213 19:04:55.747258 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:56.747608 kubelet[2411]: E0213 19:04:56.747552 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:57.748632 kubelet[2411]: E0213 19:04:57.748570 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:58.749772 kubelet[2411]: E0213 19:04:58.749715 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:59.750116 kubelet[2411]: E0213 19:04:59.750045 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:00.751127 kubelet[2411]: E0213 19:05:00.751040 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:01.752164 kubelet[2411]: E0213 19:05:01.752123 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:02.753943 kubelet[2411]: E0213 19:05:02.753872 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:02.764184 kubelet[2411]: E0213 19:05:02.764129 2411 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.23.196)"
Feb 13 19:05:03.755009 kubelet[2411]: E0213 19:05:03.754934 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:04.755746 kubelet[2411]: E0213 19:05:04.755684 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:04.784259 kubelet[2411]: E0213 19:05:04.781252 2411 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": unexpected EOF"
Feb 13 19:05:04.794686 kubelet[2411]: E0213 19:05:04.794619 2411 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": read tcp 172.31.23.196:42318->172.31.27.144:6443: read: connection reset by peer"
Feb 13 19:05:04.794686 kubelet[2411]: I0213 19:05:04.794681 2411 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 19:05:04.795245 kubelet[2411]: E0213 19:05:04.795166 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": dial tcp 172.31.27.144:6443: connect: connection refused" interval="200ms"
Feb 13 19:05:04.996998 kubelet[2411]: E0213 19:05:04.996942 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": dial tcp 172.31.27.144:6443: connect: connection refused" interval="400ms"
Feb 13 19:05:05.398879 kubelet[2411]: E0213 19:05:05.398812 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": dial tcp 172.31.27.144:6443: connect: connection refused" interval="800ms"
Feb 13 19:05:05.756886 kubelet[2411]: E0213 19:05:05.756738 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:06.757452 kubelet[2411]: E0213 19:05:06.757383 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:07.758242 kubelet[2411]: E0213 19:05:07.758150 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:08.759000 kubelet[2411]: E0213 19:05:08.758942 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:09.759302 kubelet[2411]: E0213 19:05:09.759242 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:10.669281 kubelet[2411]: E0213 19:05:10.669199 2411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:10.760418 kubelet[2411]: E0213 19:05:10.760357 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:11.761155 kubelet[2411]: E0213 19:05:11.761097 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:12.761356 kubelet[2411]: E0213 19:05:12.761290 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:13.761929 kubelet[2411]: E0213 19:05:13.761862 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:14.762995 kubelet[2411]: E0213 19:05:14.762917 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:15.763670 kubelet[2411]: E0213 19:05:15.763607 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:16.200450 kubelet[2411]: E0213 19:05:16.200319 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.196?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 13 19:05:16.764817 kubelet[2411]: E0213 19:05:16.764761 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:17.765648 kubelet[2411]: E0213 19:05:17.765583 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:18.766205 kubelet[2411]: E0213 19:05:18.766148 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:19.766671 kubelet[2411]: E0213 19:05:19.766611 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:20.767457 kubelet[2411]: E0213 19:05:20.767401 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:21.767792 kubelet[2411]: E0213 19:05:21.767723 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:22.767924 kubelet[2411]: E0213 19:05:22.767865 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:23.768499 kubelet[2411]: E0213 19:05:23.768441 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:24.769153 kubelet[2411]: E0213 19:05:24.769084 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:25.769839 kubelet[2411]: E0213 19:05:25.769771 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:26.770659 kubelet[2411]: E0213 19:05:26.770585 2411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"