Feb 13 18:51:42.174752 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 18:51:42.174796 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:51:42.174821 kernel: KASLR disabled due to lack of seed
Feb 13 18:51:42.174837 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:51:42.174853 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 18:51:42.174868 kernel: secureboot: Secure boot disabled
Feb 13 18:51:42.174886 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:51:42.174901 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 18:51:42.174917 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 18:51:42.174932 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 18:51:42.174952 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 18:51:42.174968 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 18:51:42.174983 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 18:51:42.174999 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 18:51:42.175017 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 18:51:42.175038 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 18:51:42.175055 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 18:51:42.175072 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 18:51:42.175105 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 18:51:42.175129 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 18:51:42.175146 kernel: printk: bootconsole [uart0] enabled
Feb 13 18:51:42.175163 kernel: NUMA: Failed to initialise from firmware
Feb 13 18:51:42.175180 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 18:51:42.175197 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 18:51:42.175213 kernel: Zone ranges:
Feb 13 18:51:42.175230 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 18:51:42.175253 kernel: DMA32 empty
Feb 13 18:51:42.175270 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 18:51:42.175286 kernel: Movable zone start for each node
Feb 13 18:51:42.175302 kernel: Early memory node ranges
Feb 13 18:51:42.175318 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 18:51:42.175334 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 18:51:42.175351 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 18:51:42.175367 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 18:51:42.175383 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 18:51:42.175399 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 18:51:42.175416 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 18:51:42.175432 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 18:51:42.175453 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 18:51:42.175470 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 18:51:42.175494 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:51:42.175512 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 18:51:42.175529 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:51:42.175551 kernel: psci: Trusted OS migration not required
Feb 13 18:51:42.175588 kernel: psci: SMC Calling Convention v1.1
Feb 13 18:51:42.175609 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:51:42.175626 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:51:42.175644 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 18:51:42.175661 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:51:42.175678 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:51:42.175696 kernel: CPU features: detected: Spectre-v2
Feb 13 18:51:42.175713 kernel: CPU features: detected: Spectre-v3a
Feb 13 18:51:42.175730 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:51:42.175747 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 18:51:42.175764 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 18:51:42.175787 kernel: alternatives: applying boot alternatives
Feb 13 18:51:42.175807 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:51:42.175825 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 18:51:42.175843 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:51:42.175860 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:51:42.175877 kernel: Fallback order for Node 0: 0
Feb 13 18:51:42.175894 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 18:51:42.175911 kernel: Policy zone: Normal
Feb 13 18:51:42.175928 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:51:42.175945 kernel: software IO TLB: area num 2.
Feb 13 18:51:42.175967 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 18:51:42.175985 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Feb 13 18:51:42.176002 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 18:51:42.176019 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:51:42.176038 kernel: rcu: RCU event tracing is enabled.
Feb 13 18:51:42.176055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 18:51:42.176073 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 18:51:42.177248 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 18:51:42.177287 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:51:42.177305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 18:51:42.177323 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:51:42.177350 kernel: GICv3: 96 SPIs implemented
Feb 13 18:51:42.177368 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:51:42.177385 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:51:42.177402 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 18:51:42.177420 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 18:51:42.177437 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 18:51:42.177454 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 18:51:42.177472 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 18:51:42.177490 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 18:51:42.177507 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 18:51:42.177524 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 18:51:42.177541 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:51:42.177564 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 18:51:42.177581 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 18:51:42.177599 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 18:51:42.177617 kernel: Console: colour dummy device 80x25
Feb 13 18:51:42.177635 kernel: printk: console [tty1] enabled
Feb 13 18:51:42.177653 kernel: ACPI: Core revision 20230628
Feb 13 18:51:42.177671 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 18:51:42.177689 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:51:42.177706 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:51:42.177724 kernel: landlock: Up and running.
Feb 13 18:51:42.177746 kernel: SELinux: Initializing.
Feb 13 18:51:42.177763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:51:42.177781 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:51:42.177798 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:51:42.177816 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:51:42.177833 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:51:42.177851 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 18:51:42.177868 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 18:51:42.177890 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 18:51:42.177908 kernel: Remapping and enabling EFI services.
Feb 13 18:51:42.177925 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:51:42.177942 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:51:42.177960 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 18:51:42.177977 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 18:51:42.177995 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 18:51:42.178012 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 18:51:42.178030 kernel: SMP: Total of 2 processors activated.
Feb 13 18:51:42.178047 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:51:42.178069 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 18:51:42.178087 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:51:42.178141 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:51:42.178165 kernel: alternatives: applying system-wide alternatives
Feb 13 18:51:42.178183 kernel: devtmpfs: initialized
Feb 13 18:51:42.178201 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:51:42.178219 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 18:51:42.178238 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:51:42.178256 kernel: SMBIOS 3.0.0 present.
Feb 13 18:51:42.178279 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 18:51:42.178297 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:51:42.178316 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:51:42.178334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:51:42.178352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:51:42.178371 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:51:42.178389 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Feb 13 18:51:42.178412 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:51:42.178430 kernel: cpuidle: using governor menu
Feb 13 18:51:42.178448 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:51:42.178466 kernel: ASID allocator initialised with 65536 entries
Feb 13 18:51:42.178484 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:51:42.178503 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:51:42.178521 kernel: Modules: 17360 pages in range for non-PLT usage
Feb 13 18:51:42.178539 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:51:42.178557 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:51:42.178580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:51:42.178599 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:51:42.178617 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:51:42.178635 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:51:42.178653 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:51:42.178671 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:51:42.178690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:51:42.178708 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:51:42.178726 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:51:42.178748 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:51:42.178767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:51:42.178785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:51:42.178803 kernel: ACPI: Interpreter enabled
Feb 13 18:51:42.178821 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:51:42.178839 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 18:51:42.178858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 18:51:42.179197 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 18:51:42.179421 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 18:51:42.179644 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 18:51:42.179857 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 18:51:42.180061 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 18:51:42.180086 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 18:51:42.180134 kernel: acpiphp: Slot [1] registered
Feb 13 18:51:42.180178 kernel: acpiphp: Slot [2] registered
Feb 13 18:51:42.180198 kernel: acpiphp: Slot [3] registered
Feb 13 18:51:42.180223 kernel: acpiphp: Slot [4] registered
Feb 13 18:51:42.180242 kernel: acpiphp: Slot [5] registered
Feb 13 18:51:42.180260 kernel: acpiphp: Slot [6] registered
Feb 13 18:51:42.180278 kernel: acpiphp: Slot [7] registered
Feb 13 18:51:42.180296 kernel: acpiphp: Slot [8] registered
Feb 13 18:51:42.180314 kernel: acpiphp: Slot [9] registered
Feb 13 18:51:42.180332 kernel: acpiphp: Slot [10] registered
Feb 13 18:51:42.180351 kernel: acpiphp: Slot [11] registered
Feb 13 18:51:42.180369 kernel: acpiphp: Slot [12] registered
Feb 13 18:51:42.180387 kernel: acpiphp: Slot [13] registered
Feb 13 18:51:42.180410 kernel: acpiphp: Slot [14] registered
Feb 13 18:51:42.180428 kernel: acpiphp: Slot [15] registered
Feb 13 18:51:42.180446 kernel: acpiphp: Slot [16] registered
Feb 13 18:51:42.180463 kernel: acpiphp: Slot [17] registered
Feb 13 18:51:42.180482 kernel: acpiphp: Slot [18] registered
Feb 13 18:51:42.180499 kernel: acpiphp: Slot [19] registered
Feb 13 18:51:42.180517 kernel: acpiphp: Slot [20] registered
Feb 13 18:51:42.180535 kernel: acpiphp: Slot [21] registered
Feb 13 18:51:42.180553 kernel: acpiphp: Slot [22] registered
Feb 13 18:51:42.180576 kernel: acpiphp: Slot [23] registered
Feb 13 18:51:42.180594 kernel: acpiphp: Slot [24] registered
Feb 13 18:51:42.180612 kernel: acpiphp: Slot [25] registered
Feb 13 18:51:42.180630 kernel: acpiphp: Slot [26] registered
Feb 13 18:51:42.180648 kernel: acpiphp: Slot [27] registered
Feb 13 18:51:42.180666 kernel: acpiphp: Slot [28] registered
Feb 13 18:51:42.180684 kernel: acpiphp: Slot [29] registered
Feb 13 18:51:42.180702 kernel: acpiphp: Slot [30] registered
Feb 13 18:51:42.180720 kernel: acpiphp: Slot [31] registered
Feb 13 18:51:42.180738 kernel: PCI host bridge to bus 0000:00
Feb 13 18:51:42.180948 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 18:51:42.181235 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 18:51:42.181995 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 18:51:42.183378 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 18:51:42.183646 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 18:51:42.183893 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 18:51:42.185158 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 18:51:42.185429 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 18:51:42.185636 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 18:51:42.185841 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 18:51:42.186066 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 18:51:42.187385 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 18:51:42.187622 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 18:51:42.187840 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 18:51:42.188045 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 18:51:42.188289 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 18:51:42.188508 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 18:51:42.188730 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 18:51:42.188937 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 18:51:42.191289 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 18:51:42.191604 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 18:51:42.191819 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 18:51:42.192024 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 18:51:42.192051 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 18:51:42.192072 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 18:51:42.193196 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 18:51:42.193237 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 18:51:42.193257 kernel: iommu: Default domain type: Translated
Feb 13 18:51:42.193288 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:51:42.193309 kernel: efivars: Registered efivars operations
Feb 13 18:51:42.193330 kernel: vgaarb: loaded
Feb 13 18:51:42.193348 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:51:42.193368 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:51:42.193386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:51:42.193405 kernel: pnp: PnP ACPI init
Feb 13 18:51:42.193690 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 18:51:42.193727 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 18:51:42.193747 kernel: NET: Registered PF_INET protocol family
Feb 13 18:51:42.193766 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:51:42.193789 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:51:42.193816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:51:42.193860 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:51:42.193917 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:51:42.193956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:51:42.193980 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:51:42.194020 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:51:42.194039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:51:42.194058 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:51:42.194076 kernel: kvm [1]: HYP mode not available
Feb 13 18:51:42.195148 kernel: Initialise system trusted keyrings
Feb 13 18:51:42.195176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:51:42.195195 kernel: Key type asymmetric registered
Feb 13 18:51:42.195213 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:51:42.195231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:51:42.195258 kernel: io scheduler mq-deadline registered
Feb 13 18:51:42.195277 kernel: io scheduler kyber registered
Feb 13 18:51:42.195295 kernel: io scheduler bfq registered
Feb 13 18:51:42.195555 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 18:51:42.195599 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 18:51:42.195619 kernel: ACPI: button: Power Button [PWRB]
Feb 13 18:51:42.195638 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 18:51:42.195656 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 18:51:42.195681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:51:42.195701 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 18:51:42.195916 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 18:51:42.195942 kernel: printk: console [ttyS0] disabled
Feb 13 18:51:42.195961 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 18:51:42.195979 kernel: printk: console [ttyS0] enabled
Feb 13 18:51:42.195997 kernel: printk: bootconsole [uart0] disabled
Feb 13 18:51:42.196015 kernel: thunder_xcv, ver 1.0
Feb 13 18:51:42.196034 kernel: thunder_bgx, ver 1.0
Feb 13 18:51:42.196052 kernel: nicpf, ver 1.0
Feb 13 18:51:42.196076 kernel: nicvf, ver 1.0
Feb 13 18:51:42.196329 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:51:42.196529 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:51:41 UTC (1739472701)
Feb 13 18:51:42.196554 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:51:42.196573 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 18:51:42.196593 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:51:42.196611 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:51:42.196636 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:51:42.196655 kernel: Segment Routing with IPv6
Feb 13 18:51:42.196673 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:51:42.196691 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:51:42.196709 kernel: Key type dns_resolver registered
Feb 13 18:51:42.196727 kernel: registered taskstats version 1
Feb 13 18:51:42.196745 kernel: Loading compiled-in X.509 certificates
Feb 13 18:51:42.196764 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:51:42.196782 kernel: Key type .fscrypt registered
Feb 13 18:51:42.196799 kernel: Key type fscrypt-provisioning registered
Feb 13 18:51:42.196823 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:51:42.196841 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:51:42.196859 kernel: ima: No architecture policies found
Feb 13 18:51:42.196877 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:51:42.196895 kernel: clk: Disabling unused clocks
Feb 13 18:51:42.196913 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:51:42.196931 kernel: Run /init as init process
Feb 13 18:51:42.196950 kernel: with arguments:
Feb 13 18:51:42.196968 kernel: /init
Feb 13 18:51:42.196990 kernel: with environment:
Feb 13 18:51:42.197008 kernel: HOME=/
Feb 13 18:51:42.197026 kernel: TERM=linux
Feb 13 18:51:42.197044 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:51:42.197066 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:51:42.199728 systemd[1]: Detected virtualization amazon.
Feb 13 18:51:42.199778 systemd[1]: Detected architecture arm64.
Feb 13 18:51:42.199808 systemd[1]: Running in initrd.
Feb 13 18:51:42.199829 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:51:42.199849 systemd[1]: Hostname set to .
Feb 13 18:51:42.199870 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:51:42.199891 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:51:42.199912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:51:42.199934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:51:42.199956 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:51:42.199983 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:51:42.200005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:51:42.200026 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:51:42.200052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:51:42.200074 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:51:42.200174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:51:42.200204 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:51:42.200233 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:51:42.200255 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:51:42.200276 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:51:42.200298 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:51:42.200319 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:51:42.200339 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:51:42.200359 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:51:42.200380 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:51:42.200399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:51:42.200425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:51:42.200445 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:51:42.200465 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:51:42.200485 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:51:42.200505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:51:42.200525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:51:42.200546 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:51:42.200566 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:51:42.200593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:51:42.200613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:42.200633 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:51:42.200654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:51:42.200674 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:51:42.200695 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:51:42.200780 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 18:51:42.200824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:42.200846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:42.200872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:51:42.200892 systemd-journald[252]: Journal started
Feb 13 18:51:42.200937 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2802a91879474fafd8dd4bdd0c1f95) is 8.0M, max 75.3M, 67.3M free.
Feb 13 18:51:42.154302 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 18:51:42.210121 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:51:42.210556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:51:42.216771 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 18:51:42.219743 kernel: Bridge firewalling registered
Feb 13 18:51:42.220346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:51:42.227363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:51:42.231476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:51:42.239376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:51:42.271951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:51:42.286551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:42.311407 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:51:42.317231 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:51:42.336562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:51:42.349184 dracut-cmdline[285]: dracut-dracut-053
Feb 13 18:51:42.356899 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:51:42.370495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:51:42.442206 systemd-resolved[297]: Positive Trust Anchors:
Feb 13 18:51:42.442242 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:51:42.442302 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:51:42.535134 kernel: SCSI subsystem initialized
Feb 13 18:51:42.542182 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:51:42.555186 kernel: iscsi: registered transport (tcp)
Feb 13 18:51:42.577386 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:51:42.577465 kernel: QLogic iSCSI HBA Driver
Feb 13 18:51:42.660465 kernel: random: crng init done
Feb 13 18:51:42.660414 systemd-resolved[297]: Defaulting to hostname 'linux'.
Feb 13 18:51:42.663883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:51:42.667910 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:51:42.690203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:51:42.700423 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:51:42.746244 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:51:42.746317 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:51:42.748149 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:51:42.813137 kernel: raid6: neonx8 gen() 6506 MB/s
Feb 13 18:51:42.830124 kernel: raid6: neonx4 gen() 6464 MB/s
Feb 13 18:51:42.847123 kernel: raid6: neonx2 gen() 5400 MB/s
Feb 13 18:51:42.864124 kernel: raid6: neonx1 gen() 3921 MB/s
Feb 13 18:51:42.881124 kernel: raid6: int64x8 gen() 3583 MB/s
Feb 13 18:51:42.898123 kernel: raid6: int64x4 gen() 3672 MB/s
Feb 13 18:51:42.915123 kernel: raid6: int64x2 gen() 3577 MB/s
Feb 13 18:51:42.932899 kernel: raid6: int64x1 gen() 2748 MB/s
Feb 13 18:51:42.932939 kernel: raid6: using algorithm neonx8 gen() 6506 MB/s
Feb 13 18:51:42.950886 kernel: raid6: .... xor() 4785 MB/s, rmw enabled
Feb 13 18:51:42.950924 kernel: raid6: using neon recovery algorithm
Feb 13 18:51:42.958128 kernel: xor: measuring software checksum speed
Feb 13 18:51:42.959122 kernel: 8regs : 11915 MB/sec
Feb 13 18:51:42.961296 kernel: 32regs : 12010 MB/sec
Feb 13 18:51:42.961328 kernel: arm64_neon : 9281 MB/sec
Feb 13 18:51:42.961353 kernel: xor: using function: 32regs (12010 MB/sec)
Feb 13 18:51:43.046142 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:51:43.063962 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:51:43.074407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:51:43.115266 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 18:51:43.125204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:51:43.137613 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:51:43.174714 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Feb 13 18:51:43.234808 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:51:43.244438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:51:43.363905 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:51:43.389662 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:51:43.432286 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:51:43.437521 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:51:43.443004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:51:43.445456 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:51:43.461750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:51:43.496576 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:51:43.563191 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 18:51:43.563261 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 18:51:43.599376 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 18:51:43.599666 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 18:51:43.599901 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:68:84:d7:e8:75
Feb 13 18:51:43.574999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:51:43.575248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:43.577936 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:43.580146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:51:43.580394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:43.582670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:43.593924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:43.625522 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:51:43.631131 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 18:51:43.631201 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 18:51:43.641146 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 18:51:43.644641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:43.653717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 18:51:43.653807 kernel: GPT:9289727 != 16777215
Feb 13 18:51:43.653834 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 18:51:43.654921 kernel: GPT:9289727 != 16777215
Feb 13 18:51:43.655638 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 18:51:43.657128 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:43.659344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:43.696167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:43.740999 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (523)
Feb 13 18:51:43.762130 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (522)
Feb 13 18:51:43.808500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 18:51:43.839924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 18:51:43.875259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 18:51:43.900395 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 18:51:43.902851 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 18:51:43.921409 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:51:43.933910 disk-uuid[662]: Primary Header is updated.
Feb 13 18:51:43.933910 disk-uuid[662]: Secondary Entries is updated.
Feb 13 18:51:43.933910 disk-uuid[662]: Secondary Header is updated.
Feb 13 18:51:43.943171 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:44.961163 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:44.961835 disk-uuid[663]: The operation has completed successfully.
Feb 13 18:51:45.141368 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:51:45.141584 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:51:45.183423 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:51:45.190214 sh[923]: Success
Feb 13 18:51:45.224130 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:51:45.352442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:51:45.358721 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:51:45.360720 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:51:45.401422 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:51:45.401485 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:45.403230 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:51:45.403265 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:51:45.405500 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:51:45.432135 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 18:51:45.448604 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:51:45.452383 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:51:45.468449 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:51:45.475339 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:51:45.507000 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:45.507070 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:45.508637 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:45.516240 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:45.533477 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:51:45.536223 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:45.547196 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:51:45.558536 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:51:45.671082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:51:45.685515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:51:45.746374 systemd-networkd[1129]: lo: Link UP
Feb 13 18:51:45.748283 systemd-networkd[1129]: lo: Gained carrier
Feb 13 18:51:45.752744 systemd-networkd[1129]: Enumeration completed
Feb 13 18:51:45.755801 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:51:45.760811 systemd[1]: Reached target network.target - Network.
Feb 13 18:51:45.762539 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:45.762763 ignition[1038]: Ignition 2.20.0
Feb 13 18:51:45.762545 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:51:45.762792 ignition[1038]: Stage: fetch-offline
Feb 13 18:51:45.768369 ignition[1038]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:45.768394 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:45.768837 ignition[1038]: Ignition finished successfully
Feb 13 18:51:45.780107 systemd-networkd[1129]: eth0: Link UP
Feb 13 18:51:45.780126 systemd-networkd[1129]: eth0: Gained carrier
Feb 13 18:51:45.780145 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:45.790157 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:51:45.808217 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.25.248/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 18:51:45.808838 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 18:51:45.837634 ignition[1138]: Ignition 2.20.0
Feb 13 18:51:45.837665 ignition[1138]: Stage: fetch
Feb 13 18:51:45.839394 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:45.839420 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:45.840554 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:45.863859 ignition[1138]: PUT result: OK
Feb 13 18:51:45.866703 ignition[1138]: parsed url from cmdline: ""
Feb 13 18:51:45.866831 ignition[1138]: no config URL provided
Feb 13 18:51:45.866850 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:51:45.866875 ignition[1138]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:51:45.866908 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:45.868714 ignition[1138]: PUT result: OK
Feb 13 18:51:45.870741 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 18:51:45.878075 ignition[1138]: GET result: OK
Feb 13 18:51:45.879378 ignition[1138]: parsing config with SHA512: 84ee999b9595a54ab220d7e44a7e0b0ec3ed4f6f3867ada35822d5f98af407d628013571a1e7d305bc52b4fc6b800fac110aa088ade231cb5ca756b917cb8a5f
Feb 13 18:51:45.884597 unknown[1138]: fetched base config from "system"
Feb 13 18:51:45.884618 unknown[1138]: fetched base config from "system"
Feb 13 18:51:45.885063 ignition[1138]: fetch: fetch complete
Feb 13 18:51:45.884632 unknown[1138]: fetched user config from "aws"
Feb 13 18:51:45.885075 ignition[1138]: fetch: fetch passed
Feb 13 18:51:45.893401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 18:51:45.885195 ignition[1138]: Ignition finished successfully
Feb 13 18:51:45.918450 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:51:45.940695 ignition[1146]: Ignition 2.20.0
Feb 13 18:51:45.941243 ignition[1146]: Stage: kargs
Feb 13 18:51:45.941859 ignition[1146]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:45.941884 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:45.942069 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:45.946243 ignition[1146]: PUT result: OK
Feb 13 18:51:45.953886 ignition[1146]: kargs: kargs passed
Feb 13 18:51:45.953990 ignition[1146]: Ignition finished successfully
Feb 13 18:51:45.958654 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:51:45.973440 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:51:45.996005 ignition[1152]: Ignition 2.20.0
Feb 13 18:51:45.996034 ignition[1152]: Stage: disks
Feb 13 18:51:45.997651 ignition[1152]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:45.997677 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:45.998749 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:46.004829 ignition[1152]: PUT result: OK
Feb 13 18:51:46.008641 ignition[1152]: disks: disks passed
Feb 13 18:51:46.008791 ignition[1152]: Ignition finished successfully
Feb 13 18:51:46.011482 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:51:46.017388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:51:46.021537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:51:46.024749 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:51:46.029465 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:51:46.045562 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:51:46.065464 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:51:46.112169 systemd-fsck[1161]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 18:51:46.117762 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:51:46.133416 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:51:46.213356 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:51:46.214318 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:51:46.217989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:51:46.239273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:51:46.245707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:51:46.249440 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 18:51:46.249527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:51:46.249575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:51:46.273147 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1180)
Feb 13 18:51:46.278683 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:46.278746 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:46.280374 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:46.284877 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:51:46.294411 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:46.294459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:51:46.300656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:51:46.384888 initrd-setup-root[1204]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:51:46.393845 initrd-setup-root[1211]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:51:46.402384 initrd-setup-root[1218]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:51:46.411058 initrd-setup-root[1225]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:51:46.569987 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:51:46.578323 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:51:46.583430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:51:46.611878 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:51:46.614133 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:46.649680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:51:46.657130 ignition[1295]: INFO : Ignition 2.20.0
Feb 13 18:51:46.657130 ignition[1295]: INFO : Stage: mount
Feb 13 18:51:46.657130 ignition[1295]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:46.657130 ignition[1295]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:46.657130 ignition[1295]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:46.667176 ignition[1295]: INFO : PUT result: OK
Feb 13 18:51:46.671472 ignition[1295]: INFO : mount: mount passed
Feb 13 18:51:46.671472 ignition[1295]: INFO : Ignition finished successfully
Feb 13 18:51:46.676402 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:51:46.687285 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:51:46.717502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:51:46.750633 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1306)
Feb 13 18:51:46.750698 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:46.750724 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:46.753280 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:46.758115 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:46.761429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:51:46.800078 ignition[1323]: INFO : Ignition 2.20.0
Feb 13 18:51:46.802962 ignition[1323]: INFO : Stage: files
Feb 13 18:51:46.802962 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:46.802962 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:46.802962 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:46.811211 ignition[1323]: INFO : PUT result: OK
Feb 13 18:51:46.815052 ignition[1323]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 18:51:46.817346 ignition[1323]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 18:51:46.817346 ignition[1323]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:51:46.827641 ignition[1323]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:51:46.830521 ignition[1323]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 18:51:46.833376 unknown[1323]: wrote ssh authorized keys file for user: core
Feb 13 18:51:46.837325 ignition[1323]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:51:46.837325 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:51:46.844718 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 18:51:47.254918 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 18:51:47.372490 systemd-networkd[1129]: eth0: Gained IPv6LL
Feb 13 18:51:47.609627 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:51:47.614081 ignition[1323]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:51:47.614081 ignition[1323]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:51:47.614081 ignition[1323]: INFO : files: files passed
Feb 13 18:51:47.614081 ignition[1323]: INFO : Ignition finished successfully
Feb 13 18:51:47.625225 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:51:47.644424 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:51:47.653468 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:51:47.658813 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:51:47.659896 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:51:47.687128 initrd-setup-root-after-ignition[1352]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:47.687128 initrd-setup-root-after-ignition[1352]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:47.694568 initrd-setup-root-after-ignition[1356]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:47.700823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:51:47.705280 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:51:47.723449 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:51:47.779867 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:51:47.780071 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:51:47.785227 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:51:47.790665 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:51:47.792692 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:51:47.803419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:51:47.834324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:51:47.844399 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:51:47.872942 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:51:47.878052 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:51:47.880629 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 18:51:47.882579 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 18:51:47.882880 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 18:51:47.885673 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 18:51:47.887903 systemd[1]: Stopped target basic.target - Basic System. Feb 13 18:51:47.889882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 18:51:47.892163 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:51:47.894500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 18:51:47.896788 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 18:51:47.898910 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:51:47.901394 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 18:51:47.903498 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 18:51:47.905574 systemd[1]: Stopped target swap.target - Swaps. Feb 13 18:51:47.907289 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 18:51:47.907603 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:51:47.910183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:51:47.912521 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:51:47.914960 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 18:51:47.917245 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:51:47.921655 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 18:51:47.921868 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 18:51:47.924306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 18:51:47.924527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:51:47.927060 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 18:51:47.927279 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 18:51:47.988292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 18:51:48.004801 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 18:51:48.009480 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 18:51:48.011837 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:51:48.014450 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 18:51:48.014686 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:51:48.034486 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 18:51:48.034892 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 18:51:48.046584 ignition[1376]: INFO : Ignition 2.20.0 Feb 13 18:51:48.046584 ignition[1376]: INFO : Stage: umount Feb 13 18:51:48.052394 ignition[1376]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:51:48.052394 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:51:48.052394 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:51:48.052394 ignition[1376]: INFO : PUT result: OK Feb 13 18:51:48.073006 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 18:51:48.076943 ignition[1376]: INFO : umount: umount passed Feb 13 18:51:48.076943 ignition[1376]: INFO : Ignition finished successfully Feb 13 18:51:48.076929 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 18:51:48.088954 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 18:51:48.090022 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 18:51:48.090124 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 18:51:48.101313 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 18:51:48.101431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 18:51:48.105528 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 18:51:48.105626 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 18:51:48.111330 systemd[1]: Stopped target network.target - Network. Feb 13 18:51:48.112967 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 18:51:48.113058 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:51:48.114487 systemd[1]: Stopped target paths.target - Path Units. Feb 13 18:51:48.114744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 18:51:48.128782 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:51:48.131478 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 18:51:48.135612 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 18:51:48.137422 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 18:51:48.137501 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:51:48.139358 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 18:51:48.139420 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:51:48.141340 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 18:51:48.141421 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 18:51:48.143308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 18:51:48.143382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 18:51:48.145880 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 18:51:48.149196 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 18:51:48.153792 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 18:51:48.153965 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 18:51:48.156499 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 18:51:48.156654 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 18:51:48.158787 systemd-networkd[1129]: eth0: DHCPv6 lease lost Feb 13 18:51:48.162038 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 13 18:51:48.162292 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 18:51:48.167403 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 18:51:48.167546 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:51:48.208233 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 18:51:48.210255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 18:51:48.210379 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:51:48.218837 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:51:48.221346 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 18:51:48.221646 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 18:51:48.242894 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 18:51:48.242995 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:51:48.245290 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 18:51:48.245384 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 18:51:48.247544 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 18:51:48.247626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:51:48.276793 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 18:51:48.278951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:51:48.287195 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 18:51:48.289142 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 18:51:48.294421 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 18:51:48.294523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 18:51:48.296715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 18:51:48.296784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:51:48.298752 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 18:51:48.298836 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:51:48.301075 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 18:51:48.301170 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 18:51:48.318221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:51:48.318323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:51:48.338471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 18:51:48.343464 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 18:51:48.343599 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:51:48.346449 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 18:51:48.346538 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:51:48.357762 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 18:51:48.357859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Feb 13 18:51:48.360240 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:51:48.360318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:51:48.381727 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 18:51:48.382084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 18:51:48.391193 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 18:51:48.410323 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 18:51:48.426651 systemd[1]: Switching root. Feb 13 18:51:48.465247 systemd-journald[252]: Journal stopped Feb 13 18:51:50.377571 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Feb 13 18:51:50.377712 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 18:51:50.378226 kernel: SELinux: policy capability open_perms=1 Feb 13 18:51:50.378275 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 18:51:50.378318 kernel: SELinux: policy capability always_check_network=0 Feb 13 18:51:50.378350 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 18:51:50.378384 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 18:51:50.378416 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 18:51:50.378446 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 18:51:50.378476 kernel: audit: type=1403 audit(1739472708.813:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 18:51:50.378509 systemd[1]: Successfully loaded SELinux policy in 48.899ms. Feb 13 18:51:50.378576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.298ms. Feb 13 18:51:50.378613 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:51:50.378645 systemd[1]: Detected virtualization amazon. Feb 13 18:51:50.378677 systemd[1]: Detected architecture arm64. Feb 13 18:51:50.378707 systemd[1]: Detected first boot. Feb 13 18:51:50.378741 systemd[1]: Initializing machine ID from VM UUID. Feb 13 18:51:50.378777 zram_generator::config[1419]: No configuration found. Feb 13 18:51:50.378810 systemd[1]: Populated /etc with preset unit settings. Feb 13 18:51:50.378857 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 18:51:50.378892 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 18:51:50.378925 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 18:51:50.378960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 18:51:50.378990 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 18:51:50.379021 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 18:51:50.379056 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 18:51:50.382570 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 18:51:50.382641 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 18:51:50.383061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
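Unit names such as system-addon\x2dconfig.slice and dev-disk-by\x2dlabel-OEM.device above show systemd's name escaping: "/" in a path maps to "-", and bytes not allowed verbatim in a unit-name component (including a literal "-") become C-style \xNN escapes, which is why "addon-config" appears as "addon\x2dconfig". A simplified Python illustration of that rule, as an approximation only; the real systemd-escape also special-cases a leading "." and non-ASCII input:

    # Simplified sketch of systemd-style unit-name escaping for a path or name.
    def systemd_escape(name: str) -> str:
        out = []
        for i, ch in enumerate(name):
            if ch == "/":
                out.append("-")                      # path separator maps to "-"
            elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
                out.append(ch)                       # characters kept verbatim
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    print(systemd_escape("addon-config"))        # addon\x2dconfig
    print(systemd_escape("disk/by-label/OEM"))   # disk-by\x2dlabel-OEM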
Feb 13 18:51:50.384347 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 18:51:50.384540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:51:50.384573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:51:50.384604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 18:51:50.384642 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 18:51:50.384676 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 18:51:50.384707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:51:50.384739 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 18:51:50.384770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:51:50.384799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 18:51:50.384830 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 18:51:50.384862 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 18:51:50.384897 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 18:51:50.384927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:51:50.384957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:51:50.384988 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:51:50.385022 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:51:50.385055 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 18:51:50.385086 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 18:51:50.387192 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:51:50.387226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:51:50.387263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:51:50.387292 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 18:51:50.387325 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 18:51:50.387356 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 18:51:50.387389 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 18:51:50.387422 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 18:51:50.387453 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 18:51:50.387506 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 18:51:50.387541 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 18:51:50.387577 systemd[1]: Reached target machines.target - Containers. Feb 13 18:51:50.387610 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 18:51:50.387639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 18:51:50.387670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:51:50.387700 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 18:51:50.387732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:51:50.387762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:51:50.387794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:51:50.387826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 18:51:50.387857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:51:50.387888 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 18:51:50.387918 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 18:51:50.387948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 18:51:50.387977 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 18:51:50.388005 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 18:51:50.388033 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:51:50.388060 kernel: fuse: init (API version 7.39) Feb 13 18:51:50.388125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:51:50.388160 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 18:51:50.388188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 18:51:50.388217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:51:50.388248 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 18:51:50.388277 systemd[1]: Stopped verity-setup.service. Feb 13 18:51:50.388304 kernel: ACPI: bus type drm_connector registered Feb 13 18:51:50.388332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 18:51:50.388362 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 18:51:50.388396 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 18:51:50.388425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 18:51:50.388453 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 18:51:50.388481 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 18:51:50.388510 kernel: loop: module loaded Feb 13 18:51:50.388542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:51:50.388571 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 18:51:50.388599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 18:51:50.388629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:51:50.388661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:51:50.388689 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 18:51:50.388717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:51:50.388748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:51:50.388777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 18:51:50.388809 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 18:51:50.388838 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 18:51:50.388870 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:51:50.388902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:51:50.388931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:51:50.388964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 18:51:50.388995 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 18:51:50.389026 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 18:51:50.394186 systemd-journald[1501]: Collecting audit messages is disabled. Feb 13 18:51:50.394270 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 18:51:50.394304 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 18:51:50.394335 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 18:51:50.394373 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:51:50.394404 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 18:51:50.394434 systemd-journald[1501]: Journal started Feb 13 18:51:50.394490 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec2802a91879474fafd8dd4bdd0c1f95) is 8.0M, max 75.3M, 67.3M free. Feb 13 18:51:49.777828 systemd[1]: Queued start job for default target multi-user.target. Feb 13 18:51:49.804475 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 18:51:49.805274 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 18:51:50.405219 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 18:51:50.415206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 18:51:50.423153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:51:50.441436 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 18:51:50.445250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:51:50.464121 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 18:51:50.464211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 18:51:50.475829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:51:50.489505 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 18:51:50.501794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 18:51:50.512121 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:51:50.512655 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 18:51:50.518410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Feb 13 18:51:50.521587 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 18:51:50.524477 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 18:51:50.544038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 18:51:50.561121 kernel: loop0: detected capacity change from 0 to 113552 Feb 13 18:51:50.603595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 18:51:50.616513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 18:51:50.633886 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 18:51:50.670132 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 18:51:50.710491 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec2802a91879474fafd8dd4bdd0c1f95 is 70.778ms for 896 entries. Feb 13 18:51:50.710491 systemd-journald[1501]: System Journal (/var/log/journal/ec2802a91879474fafd8dd4bdd0c1f95) is 8.0M, max 195.6M, 187.6M free. Feb 13 18:51:50.809254 systemd-journald[1501]: Received client request to flush runtime journal. Feb 13 18:51:50.809328 kernel: loop1: detected capacity change from 0 to 116784 Feb 13 18:51:50.716877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:51:50.727997 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 18:51:50.730570 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 18:51:50.742387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:51:50.758936 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 18:51:50.777211 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Feb 13 18:51:50.777235 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Feb 13 18:51:50.799180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:51:50.818305 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 18:51:50.821652 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 18:51:50.835844 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 18:51:50.848006 kernel: loop2: detected capacity change from 0 to 53784 Feb 13 18:51:50.913214 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 18:51:50.924443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 18:51:50.976190 kernel: loop3: detected capacity change from 0 to 201592 Feb 13 18:51:50.998616 systemd-tmpfiles[1572]: ACLs are not supported, ignoring. Feb 13 18:51:50.998657 systemd-tmpfiles[1572]: ACLs are not supported, ignoring. Feb 13 18:51:51.016627 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 18:51:51.048135 kernel: loop4: detected capacity change from 0 to 113552 Feb 13 18:51:51.080520 kernel: loop5: detected capacity change from 0 to 116784 Feb 13 18:51:51.110153 kernel: loop6: detected capacity change from 0 to 53784 Feb 13 18:51:51.140328 kernel: loop7: detected capacity change from 0 to 201592 Feb 13 18:51:51.184761 (sd-merge)[1577]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 18:51:51.186811 (sd-merge)[1577]: Merged extensions into '/usr'. Feb 13 18:51:51.199995 systemd[1]: Reloading requested from client PID 1530 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 18:51:51.200389 systemd[1]: Reloading... Feb 13 18:51:51.406144 zram_generator::config[1603]: No configuration found. Feb 13 18:51:51.483323 ldconfig[1523]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 18:51:51.715296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:51:51.845497 systemd[1]: Reloading finished in 644 ms. Feb 13 18:51:51.886184 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 18:51:51.889387 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 18:51:51.892740 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 18:51:51.909482 systemd[1]: Starting ensure-sysext.service... Feb 13 18:51:51.918606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:51:51.925474 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:51:51.948835 systemd[1]: Reloading requested from client PID 1656 ('systemctl') (unit ensure-sysext.service)... Feb 13 18:51:51.948869 systemd[1]: Reloading... Feb 13 18:51:51.986445 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 18:51:51.986957 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 18:51:51.990086 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 18:51:51.990681 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Feb 13 18:51:51.990821 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Feb 13 18:51:52.003558 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:51:52.003762 systemd-tmpfiles[1657]: Skipping /boot Feb 13 18:51:52.051064 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:51:52.052219 systemd-tmpfiles[1657]: Skipping /boot Feb 13 18:51:52.077479 systemd-udevd[1658]: Using default interface naming scheme 'v255'. Feb 13 18:51:52.160180 zram_generator::config[1693]: No configuration found. Feb 13 18:51:52.449944 (udev-worker)[1707]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:51:52.537974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
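The "(sd-merge)" entries above are systemd-sysext merging the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' into /usr; the kubernetes one is the .raw image Ignition wrote under /opt/extensions and symlinked from /etc/extensions earlier in this log, and the loop device capacity changes around these lines correspond to those images being attached. A rough Python sketch of just the discovery step, assuming the usual sysext search directories; the real tool also validates extension-release metadata and mounts an overlayfs:

    # Sketch: list sysext candidates the way systemd-sysext searches its
    # hierarchy (simplified; earlier directories take precedence on name clashes).
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_extensions():
        found = {}
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                if entry.is_dir() or entry.suffix == ".raw":
                    name = entry.stem if entry.suffix == ".raw" else entry.name
                    found.setdefault(name, entry)
        return found

    for name, path in candidate_extensions().items():
        print(name, "->", path)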
Feb 13 18:51:52.627127 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1737) Feb 13 18:51:52.701143 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 18:51:52.701666 systemd[1]: Reloading finished in 752 ms. Feb 13 18:51:52.782566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:51:52.798221 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:51:52.857285 systemd[1]: Finished ensure-sysext.service. Feb 13 18:51:52.881428 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:51:52.893426 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 18:51:52.896001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:51:52.906408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:51:52.911729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:51:52.918403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:51:52.923762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:51:52.926467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:51:52.931429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 18:51:52.940446 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 18:51:52.949555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:51:52.952519 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 18:51:52.960315 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 18:51:52.966711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:51:52.985372 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 18:51:52.986967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:51:53.036993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:51:53.039245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:51:53.056681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:51:53.090571 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 18:51:53.097950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:51:53.098367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:51:53.109471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 18:51:53.113942 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:51:53.115231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:51:53.145693 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 18:51:53.147957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 18:51:53.149745 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 18:51:53.154423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 18:51:53.179564 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 18:51:53.181791 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 18:51:53.184413 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 18:51:53.200456 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 18:51:53.203630 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 18:51:53.236161 lvm[1888]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:51:53.246785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 18:51:53.263230 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 18:51:53.267735 augenrules[1897]: No rules Feb 13 18:51:53.269732 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:51:53.270233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:51:53.309747 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 18:51:53.313438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:51:53.321494 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 18:51:53.326967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:51:53.337227 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 18:51:53.351120 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:51:53.404895 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 18:51:53.456366 systemd-networkd[1860]: lo: Link UP Feb 13 18:51:53.456829 systemd-networkd[1860]: lo: Gained carrier Feb 13 18:51:53.459695 systemd-networkd[1860]: Enumeration completed Feb 13 18:51:53.461063 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:51:53.461162 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:51:53.465176 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:51:53.468210 systemd-networkd[1860]: eth0: Link UP Feb 13 18:51:53.468778 systemd-networkd[1860]: eth0: Gained carrier Feb 13 18:51:53.468949 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:51:53.478459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 18:51:53.483074 systemd-resolved[1861]: Positive Trust Anchors: Feb 13 18:51:53.483139 systemd-resolved[1861]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:51:53.483202 systemd-resolved[1861]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:51:53.484229 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.25.248/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 18:51:53.492176 systemd-resolved[1861]: Defaulting to hostname 'linux'. Feb 13 18:51:53.495529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:51:53.499848 systemd[1]: Reached target network.target - Network. Feb 13 18:51:53.501960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:51:53.504221 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 18:51:53.506409 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 18:51:53.508884 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 18:51:53.511559 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 18:51:53.514554 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 18:51:53.516946 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 18:51:53.519309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 18:51:53.519361 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:51:53.521086 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:51:53.524669 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 18:51:53.529330 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 18:51:53.537387 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 18:51:53.540503 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 18:51:53.542797 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:51:53.544709 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:51:53.546574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:51:53.546626 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:51:53.565412 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 18:51:53.570599 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 18:51:53.576563 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 18:51:53.589547 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 18:51:53.596174 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
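In the networkd entries above, eth0 acquires "DHCPv4 address 172.31.25.248/20, gateway 172.31.16.1"; the /20 prefix is what makes that gateway on-link. A quick check of the arithmetic with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.25.248/20")
    gw = ipaddress.ip_address("172.31.16.1")

    print(iface.network)          # 172.31.16.0/20
    print(iface.network.netmask)  # 255.255.240.0
    print(gw in iface.network)    # True: the gateway falls inside the leased subnet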
Feb 13 18:51:53.599492 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 18:51:53.604127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 18:51:53.615062 jq[1923]: false Feb 13 18:51:53.615806 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 18:51:53.622129 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 18:51:53.628435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 18:51:53.635517 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 18:51:53.660329 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 18:51:53.663349 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 18:51:53.666076 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 18:51:53.668318 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 18:51:53.673360 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 18:51:53.680906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 18:51:53.684745 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 18:51:53.719961 dbus-daemon[1922]: [system] SELinux support is enabled Feb 13 18:51:53.720427 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 18:51:53.728659 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 18:51:53.728707 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 18:51:53.731330 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 18:51:53.731367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 18:51:53.749649 dbus-daemon[1922]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 18:51:53.751059 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 18:51:53.774390 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 18:51:53.777490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 18:51:53.815542 jq[1932]: true Feb 13 18:51:53.780173 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 18:51:53.852132 extend-filesystems[1924]: Found loop4 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found loop5 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found loop6 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found loop7 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found nvme0n1 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found nvme0n1p1 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found nvme0n1p2 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found nvme0n1p3 Feb 13 18:51:53.852132 extend-filesystems[1924]: Found usr Feb 13 18:51:53.852132 extend-filesystems[1924]: Found nvme0n1p4 Feb 13 18:51:53.887669 extend-filesystems[1924]: Found nvme0n1p6 Feb 13 18:51:53.887669 extend-filesystems[1924]: Found nvme0n1p7 Feb 13 18:51:53.887669 extend-filesystems[1924]: Found nvme0n1p9 Feb 13 18:51:53.887669 extend-filesystems[1924]: Checking size of /dev/nvme0n1p9 Feb 13 18:51:53.907715 jq[1945]: true Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: ---------------------------------------------------- Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: corporation. Support and training for ntp-4 are Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: available at https://www.nwtime.org/support Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: ---------------------------------------------------- Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: proto: precision = 0.096 usec (-23) Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: basedate set to 2025-02-01 Feb 13 18:51:53.926455 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: gps base set to 2025-02-02 (week 2352) Feb 13 18:51:53.917906 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting Feb 13 18:51:53.917952 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 18:51:53.917972 ntpd[1926]: ---------------------------------------------------- Feb 13 18:51:53.917990 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Feb 13 18:51:53.918008 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 18:51:53.918026 ntpd[1926]: corporation. 
Support and training for ntp-4 are Feb 13 18:51:53.918043 ntpd[1926]: available at https://www.nwtime.org/support Feb 13 18:51:53.918061 ntpd[1926]: ---------------------------------------------------- Feb 13 18:51:53.921683 ntpd[1926]: proto: precision = 0.096 usec (-23) Feb 13 18:51:53.923988 ntpd[1926]: basedate set to 2025-02-01 Feb 13 18:51:53.924019 ntpd[1926]: gps base set to 2025-02-02 (week 2352) Feb 13 18:51:53.931680 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listen normally on 3 eth0 172.31.25.248:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listen normally on 4 lo [::1]:123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: bind(21) AF_INET6 fe80::468:84ff:fed7:e875%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: unable to create socket on eth0 (5) for fe80::468:84ff:fed7:e875%2#123 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: failed to init interface for address fe80::468:84ff:fed7:e875%2 Feb 13 18:51:53.934321 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: Listening on routing socket on fd #21 for interface updates Feb 13 18:51:53.931777 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 18:51:53.932048 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 18:51:53.932137 ntpd[1926]: Listen normally on 3 eth0 172.31.25.248:123 Feb 13 18:51:53.932207 ntpd[1926]: Listen normally on 4 lo [::1]:123 Feb 13 18:51:53.932282 ntpd[1926]: bind(21) AF_INET6 fe80::468:84ff:fed7:e875%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:51:53.932321 ntpd[1926]: unable to create socket on eth0 (5) for fe80::468:84ff:fed7:e875%2#123 Feb 13 18:51:53.932351 ntpd[1926]: failed to init interface for address fe80::468:84ff:fed7:e875%2 Feb 13 18:51:53.932412 ntpd[1926]: Listening on routing socket on fd #21 for interface updates Feb 13 18:51:53.941351 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:51:53.945120 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:51:53.945120 ntpd[1926]: 13 Feb 18:51:53 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:51:53.943185 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:51:53.947315 (ntainerd)[1954]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 18:51:53.957673 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 18:51:53.958032 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 18:51:53.985534 update_engine[1931]: I20250213 18:51:53.983038 1931 main.cc:92] Flatcar Update Engine starting Feb 13 18:51:53.996513 extend-filesystems[1924]: Resized partition /dev/nvme0n1p9 Feb 13 18:51:54.001433 extend-filesystems[1973]: resize2fs 1.47.1 (20-May-2024) Feb 13 18:51:54.004060 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 18:51:54.009419 update_engine[1931]: I20250213 18:51:54.004325 1931 update_check_scheduler.cc:74] Next update check in 9m16s Feb 13 18:51:54.022488 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 18:51:54.015981 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 18:51:54.144555 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 18:51:54.146379 systemd-logind[1930]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 18:51:54.146414 systemd-logind[1930]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 18:51:54.148078 systemd-logind[1930]: New seat seat0. Feb 13 18:51:54.152820 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 18:51:54.185968 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 18:51:54.205349 extend-filesystems[1973]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 18:51:54.205349 extend-filesystems[1973]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 18:51:54.205349 extend-filesystems[1973]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 18:51:54.223696 extend-filesystems[1924]: Resized filesystem in /dev/nvme0n1p9 Feb 13 18:51:54.219848 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 18:51:54.221217 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 18:51:54.238824 bash[1985]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:51:54.240122 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1699) Feb 13 18:51:54.267019 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 18:51:54.276441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:51:54.278604 coreos-metadata[1921]: Feb 13 18:51:54.278 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 18:51:54.284625 systemd[1]: Starting sshkeys.service... 
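The extend-filesystems / resize2fs entries above grow the ext4 filesystem on /dev/nvme0n1p9 (mounted on /) from 553472 to 1489915 blocks at the 4 KiB block size reported by the kernel, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. The arithmetic, reproduced in Python:

    BLOCK = 4096  # 4 KiB ext4 block size, as reported in the kernel messages above
    for blocks in (553472, 1489915):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 blocks = 2.11 GiB
    # 1489915 blocks = 5.68 GiB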
Feb 13 18:51:54.293537 coreos-metadata[1921]: Feb 13 18:51:54.293 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 18:51:54.295499 coreos-metadata[1921]: Feb 13 18:51:54.294 INFO Fetch successful Feb 13 18:51:54.295499 coreos-metadata[1921]: Feb 13 18:51:54.295 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 18:51:54.297698 coreos-metadata[1921]: Feb 13 18:51:54.297 INFO Fetch successful Feb 13 18:51:54.297698 coreos-metadata[1921]: Feb 13 18:51:54.297 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 18:51:54.300212 coreos-metadata[1921]: Feb 13 18:51:54.300 INFO Fetch successful Feb 13 18:51:54.300212 coreos-metadata[1921]: Feb 13 18:51:54.300 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 18:51:54.302784 coreos-metadata[1921]: Feb 13 18:51:54.302 INFO Fetch successful Feb 13 18:51:54.302784 coreos-metadata[1921]: Feb 13 18:51:54.302 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 18:51:54.304076 coreos-metadata[1921]: Feb 13 18:51:54.303 INFO Fetch failed with 404: resource not found Feb 13 18:51:54.304076 coreos-metadata[1921]: Feb 13 18:51:54.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 18:51:54.304076 coreos-metadata[1921]: Feb 13 18:51:54.304 INFO Fetch successful Feb 13 18:51:54.304076 coreos-metadata[1921]: Feb 13 18:51:54.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 18:51:54.313069 coreos-metadata[1921]: Feb 13 18:51:54.312 INFO Fetch successful Feb 13 18:51:54.313069 coreos-metadata[1921]: Feb 13 18:51:54.313 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 18:51:54.315743 coreos-metadata[1921]: Feb 13 18:51:54.315 INFO Fetch successful Feb 13 18:51:54.315743 coreos-metadata[1921]: Feb 13 18:51:54.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 18:51:54.316499 coreos-metadata[1921]: Feb 13 18:51:54.316 INFO Fetch successful Feb 13 18:51:54.316499 coreos-metadata[1921]: Feb 13 18:51:54.316 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 18:51:54.317552 coreos-metadata[1921]: Feb 13 18:51:54.317 INFO Fetch successful Feb 13 18:51:54.423572 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 18:51:54.426913 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 18:51:54.429256 dbus-daemon[1922]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1941 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 18:51:54.429908 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 18:51:54.441765 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 18:51:54.484934 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 18:51:54.488045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:51:54.491875 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 18:51:54.546280 containerd[1954]: time="2025-02-13T18:51:54.544156820Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:51:54.590833 polkitd[2056]: Started polkitd version 121 Feb 13 18:51:54.624876 polkitd[2056]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 18:51:54.625030 polkitd[2056]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 18:51:54.638413 polkitd[2056]: Finished loading, compiling and executing 2 rules Feb 13 18:51:54.646376 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 18:51:54.652794 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 18:51:54.666168 polkitd[2056]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 18:51:54.667380 systemd-networkd[1860]: eth0: Gained IPv6LL Feb 13 18:51:54.678907 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:51:54.683679 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:51:54.693862 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 18:51:54.701678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:51:54.709535 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:51:54.741831 locksmithd[1977]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:51:54.778667 systemd-hostnamed[1941]: Hostname set to (transient) Feb 13 18:51:54.784175 systemd-resolved[1861]: System hostname changed to 'ip-172-31-25-248'. Feb 13 18:51:54.800293 coreos-metadata[2039]: Feb 13 18:51:54.798 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 18:51:54.807769 coreos-metadata[2039]: Feb 13 18:51:54.803 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 18:51:54.807769 coreos-metadata[2039]: Feb 13 18:51:54.806 INFO Fetch successful Feb 13 18:51:54.807769 coreos-metadata[2039]: Feb 13 18:51:54.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 18:51:54.807769 coreos-metadata[2039]: Feb 13 18:51:54.807 INFO Fetch successful Feb 13 18:51:54.819411 unknown[2039]: wrote ssh authorized keys file for user: core Feb 13 18:51:54.829059 containerd[1954]: time="2025-02-13T18:51:54.827979226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.858556 containerd[1954]: time="2025-02-13T18:51:54.858474346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:51:54.858556 containerd[1954]: time="2025-02-13T18:51:54.858545494Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:51:54.858706 containerd[1954]: time="2025-02-13T18:51:54.858586618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:51:54.858969 containerd[1954]: time="2025-02-13T18:51:54.858903802Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 18:51:54.859053 containerd[1954]: time="2025-02-13T18:51:54.858967138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.859238 containerd[1954]: time="2025-02-13T18:51:54.859185166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:51:54.859324 containerd[1954]: time="2025-02-13T18:51:54.859232842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864470 containerd[1954]: time="2025-02-13T18:51:54.859610410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864470 containerd[1954]: time="2025-02-13T18:51:54.859666654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864470 containerd[1954]: time="2025-02-13T18:51:54.859708606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864470 containerd[1954]: time="2025-02-13T18:51:54.859754662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864470 containerd[1954]: time="2025-02-13T18:51:54.859976746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.864983 containerd[1954]: time="2025-02-13T18:51:54.864905602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:51:54.865312 containerd[1954]: time="2025-02-13T18:51:54.865248406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:51:54.865312 containerd[1954]: time="2025-02-13T18:51:54.865306078Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 18:51:54.865613 containerd[1954]: time="2025-02-13T18:51:54.865557106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 18:51:54.865742 containerd[1954]: time="2025-02-13T18:51:54.865703170Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:51:54.882836 containerd[1954]: time="2025-02-13T18:51:54.882731158Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:51:54.884368 containerd[1954]: time="2025-02-13T18:51:54.882947266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:51:54.884368 containerd[1954]: time="2025-02-13T18:51:54.883023346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:51:54.884368 containerd[1954]: time="2025-02-13T18:51:54.883081198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 18:51:54.884368 containerd[1954]: time="2025-02-13T18:51:54.883581538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:51:54.885744 containerd[1954]: time="2025-02-13T18:51:54.884302054Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.888991186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889532614Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889582954Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889644238Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889681822Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889717198Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889748746Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889786126Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889819150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889850254Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889878814Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889909210Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889960462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.900124 containerd[1954]: time="2025-02-13T18:51:54.889993174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.900798 update-ssh-keys[2117]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.890023894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.890055982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.891833578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.891936982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.892005082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.892043038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.892139722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893206342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893297494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893358142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893391574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893464246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893543914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893617450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901236 containerd[1954]: time="2025-02-13T18:51:54.893649514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.894215158Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.894597754Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.894639526Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.894855154Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.895994770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.898072966Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.898167418Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:51:54.901894 containerd[1954]: time="2025-02-13T18:51:54.898225342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 18:51:54.902195 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.900451234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.901788046Z" level=info msg="Connect containerd service" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.901905034Z" level=info msg="using legacy CRI server" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.905213866Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.905682730Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.910058566Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.910506514Z" level=info msg="Start subscribing containerd event" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.910838830Z" level=info msg="Start recovering state" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.913537714Z" level=info msg="Start event monitor" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.913634386Z" level=info msg="Start snapshots syncer" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.913661602Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.913682158Z" level=info msg="Start streaming server" Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.915406510Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.915610414Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:51:54.921904 containerd[1954]: time="2025-02-13T18:51:54.915776890Z" level=info msg="containerd successfully booted in 0.377738s" Feb 13 18:51:54.917714 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:51:54.924056 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:51:54.940242 systemd[1]: Finished sshkeys.service. Feb 13 18:51:55.025374 amazon-ssm-agent[2100]: Initializing new seelog logger Feb 13 18:51:55.028801 amazon-ssm-agent[2100]: New Seelog Logger Creation Complete Feb 13 18:51:55.028801 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.028801 amazon-ssm-agent[2100]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.029298 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 processing appconfig overrides Feb 13 18:51:55.029765 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.029765 amazon-ssm-agent[2100]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.029911 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 processing appconfig overrides Feb 13 18:51:55.031309 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.031309 amazon-ssm-agent[2100]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.031309 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 processing appconfig overrides Feb 13 18:51:55.031309 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO Proxy environment variables: Feb 13 18:51:55.038161 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:51:55.038161 amazon-ssm-agent[2100]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 18:51:55.038360 amazon-ssm-agent[2100]: 2025/02/13 18:51:55 processing appconfig overrides Feb 13 18:51:55.133787 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO http_proxy: Feb 13 18:51:55.234225 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO no_proxy: Feb 13 18:51:55.335183 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO https_proxy: Feb 13 18:51:55.432739 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO Checking if agent identity type OnPrem can be assumed Feb 13 18:51:55.532201 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO Checking if agent identity type EC2 can be assumed Feb 13 18:51:55.631131 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO Agent will take identity from EC2 Feb 13 18:51:55.729883 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:51:55.829377 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:51:55.929120 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:51:56.028222 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 18:51:56.129378 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 18:51:56.229721 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 18:51:56.331124 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 18:51:56.430182 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [Registrar] Starting registrar module Feb 13 18:51:56.476406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:51:56.480471 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:51:56.530545 amazon-ssm-agent[2100]: 2025-02-13 18:51:55 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 18:51:56.537979 amazon-ssm-agent[2100]: 2025-02-13 18:51:56 INFO [EC2Identity] EC2 registration was successful. Feb 13 18:51:56.538245 amazon-ssm-agent[2100]: 2025-02-13 18:51:56 INFO [CredentialRefresher] credentialRefresher has started Feb 13 18:51:56.539347 amazon-ssm-agent[2100]: 2025-02-13 18:51:56 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 18:51:56.539347 amazon-ssm-agent[2100]: 2025-02-13 18:51:56 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 18:51:56.629024 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:51:56.630843 amazon-ssm-agent[2100]: 2025-02-13 18:51:56 INFO [CredentialRefresher] Next credential rotation will be in 31.508300143633335 minutes Feb 13 18:51:56.674328 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:51:56.686109 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:51:56.698702 systemd[1]: Started sshd@0-172.31.25.248:22-139.178.68.195:32862.service - OpenSSH per-connection server daemon (139.178.68.195:32862). Feb 13 18:51:56.702508 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 18:51:56.702830 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:51:56.717642 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Feb 13 18:51:56.747746 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:51:56.762599 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 18:51:56.767492 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 18:51:56.772666 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:51:56.774710 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:51:56.782218 systemd[1]: Startup finished in 1.113s (kernel) + 7.032s (initrd) + 8.015s (userspace) = 16.162s. Feb 13 18:51:56.810370 agetty[2168]: failed to open credentials directory Feb 13 18:51:56.825445 agetty[2167]: failed to open credentials directory Feb 13 18:51:56.918703 ntpd[1926]: Listen normally on 6 eth0 [fe80::468:84ff:fed7:e875%2]:123 Feb 13 18:51:56.920219 ntpd[1926]: 13 Feb 18:51:56 ntpd[1926]: Listen normally on 6 eth0 [fe80::468:84ff:fed7:e875%2]:123 Feb 13 18:51:56.972361 sshd[2161]: Accepted publickey for core from 139.178.68.195 port 32862 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:56.977135 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:56.999230 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:51:57.006381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 18:51:57.013267 systemd-logind[1930]: New session 1 of user core. Feb 13 18:51:57.040917 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:51:57.051795 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 18:51:57.064720 (systemd)[2176]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:51:57.302227 systemd[2176]: Queued start job for default target default.target. Feb 13 18:51:57.309902 systemd[2176]: Created slice app.slice - User Application Slice. Feb 13 18:51:57.309970 systemd[2176]: Reached target paths.target - Paths. Feb 13 18:51:57.310002 systemd[2176]: Reached target timers.target - Timers. Feb 13 18:51:57.314354 systemd[2176]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:51:57.336861 systemd[2176]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:51:57.337142 systemd[2176]: Reached target sockets.target - Sockets. Feb 13 18:51:57.337189 systemd[2176]: Reached target basic.target - Basic System. Feb 13 18:51:57.337280 systemd[2176]: Reached target default.target - Main User Target. Feb 13 18:51:57.337341 systemd[2176]: Startup finished in 259ms. Feb 13 18:51:57.337489 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:51:57.346403 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 18:51:57.423242 kubelet[2147]: E0213 18:51:57.423172 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:51:57.428370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:51:57.428710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:51:57.429347 systemd[1]: kubelet.service: Consumed 1.323s CPU time. 
Feb 13 18:51:57.500673 systemd[1]: Started sshd@1-172.31.25.248:22-139.178.68.195:40374.service - OpenSSH per-connection server daemon (139.178.68.195:40374). Feb 13 18:51:57.568329 amazon-ssm-agent[2100]: 2025-02-13 18:51:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 18:51:57.669620 amazon-ssm-agent[2100]: 2025-02-13 18:51:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Feb 13 18:51:57.708255 sshd[2189]: Accepted publickey for core from 139.178.68.195 port 40374 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:57.715196 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:57.723187 systemd-logind[1930]: New session 2 of user core. Feb 13 18:51:57.735440 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:51:57.769744 amazon-ssm-agent[2100]: 2025-02-13 18:51:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 18:51:57.865851 sshd[2200]: Connection closed by 139.178.68.195 port 40374 Feb 13 18:51:57.867005 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Feb 13 18:51:57.873439 systemd[1]: sshd@1-172.31.25.248:22-139.178.68.195:40374.service: Deactivated successfully. Feb 13 18:51:57.877455 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 18:51:57.878704 systemd-logind[1930]: Session 2 logged out. Waiting for processes to exit. Feb 13 18:51:57.881121 systemd-logind[1930]: Removed session 2. Feb 13 18:51:57.911820 systemd[1]: Started sshd@2-172.31.25.248:22-139.178.68.195:40386.service - OpenSSH per-connection server daemon (139.178.68.195:40386). Feb 13 18:51:58.092147 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 40386 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:58.094552 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:58.102215 systemd-logind[1930]: New session 3 of user core. Feb 13 18:51:58.114383 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:51:58.234649 sshd[2209]: Connection closed by 139.178.68.195 port 40386 Feb 13 18:51:58.233656 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Feb 13 18:51:58.238725 systemd[1]: sshd@2-172.31.25.248:22-139.178.68.195:40386.service: Deactivated successfully. Feb 13 18:51:58.241704 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 18:51:58.245186 systemd-logind[1930]: Session 3 logged out. Waiting for processes to exit. Feb 13 18:51:58.247059 systemd-logind[1930]: Removed session 3. Feb 13 18:51:58.272576 systemd[1]: Started sshd@3-172.31.25.248:22-139.178.68.195:40398.service - OpenSSH per-connection server daemon (139.178.68.195:40398). Feb 13 18:51:58.450691 sshd[2214]: Accepted publickey for core from 139.178.68.195 port 40398 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:58.453134 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:58.460438 systemd-logind[1930]: New session 4 of user core. Feb 13 18:51:58.472353 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 18:51:58.598715 sshd[2216]: Connection closed by 139.178.68.195 port 40398 Feb 13 18:51:58.598520 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Feb 13 18:51:58.603106 systemd-logind[1930]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:51:58.604417 systemd[1]: sshd@3-172.31.25.248:22-139.178.68.195:40398.service: Deactivated successfully. Feb 13 18:51:58.607132 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:51:58.610329 systemd-logind[1930]: Removed session 4. Feb 13 18:51:58.642563 systemd[1]: Started sshd@4-172.31.25.248:22-139.178.68.195:40402.service - OpenSSH per-connection server daemon (139.178.68.195:40402). Feb 13 18:51:58.829171 sshd[2221]: Accepted publickey for core from 139.178.68.195 port 40402 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:58.831523 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:58.839588 systemd-logind[1930]: New session 5 of user core. Feb 13 18:51:58.849394 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:51:58.966260 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 18:51:58.966862 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:51:58.983691 sudo[2224]: pam_unix(sudo:session): session closed for user root Feb 13 18:51:59.007151 sshd[2223]: Connection closed by 139.178.68.195 port 40402 Feb 13 18:51:59.008219 sshd-session[2221]: pam_unix(sshd:session): session closed for user core Feb 13 18:51:59.013506 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 18:51:59.015976 systemd[1]: sshd@4-172.31.25.248:22-139.178.68.195:40402.service: Deactivated successfully. Feb 13 18:51:59.020375 systemd-logind[1930]: Session 5 logged out. Waiting for processes to exit. Feb 13 18:51:59.022180 systemd-logind[1930]: Removed session 5. Feb 13 18:51:59.047629 systemd[1]: Started sshd@5-172.31.25.248:22-139.178.68.195:40414.service - OpenSSH per-connection server daemon (139.178.68.195:40414). Feb 13 18:51:59.234431 sshd[2229]: Accepted publickey for core from 139.178.68.195 port 40414 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:59.236893 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:59.244816 systemd-logind[1930]: New session 6 of user core. Feb 13 18:51:59.255349 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 18:51:59.359386 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 18:51:59.360553 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:51:59.366862 sudo[2233]: pam_unix(sudo:session): session closed for user root Feb 13 18:51:59.376704 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 18:51:59.377346 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:51:59.401687 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:51:59.449214 augenrules[2255]: No rules Feb 13 18:51:59.450719 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:51:59.452184 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Feb 13 18:51:59.454046 sudo[2232]: pam_unix(sudo:session): session closed for user root Feb 13 18:51:59.477943 sshd[2231]: Connection closed by 139.178.68.195 port 40414 Feb 13 18:51:59.478811 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Feb 13 18:51:59.485233 systemd[1]: sshd@5-172.31.25.248:22-139.178.68.195:40414.service: Deactivated successfully. Feb 13 18:51:59.485573 systemd-logind[1930]: Session 6 logged out. Waiting for processes to exit. Feb 13 18:51:59.489530 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 18:51:59.491402 systemd-logind[1930]: Removed session 6. Feb 13 18:51:59.522597 systemd[1]: Started sshd@6-172.31.25.248:22-139.178.68.195:40426.service - OpenSSH per-connection server daemon (139.178.68.195:40426). Feb 13 18:51:59.704536 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 40426 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:51:59.706885 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:51:59.715223 systemd-logind[1930]: New session 7 of user core. Feb 13 18:51:59.722346 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 18:51:59.826541 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 18:51:59.827823 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:52:00.793450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:00.794426 systemd[1]: kubelet.service: Consumed 1.323s CPU time. Feb 13 18:52:00.805563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:52:00.859423 systemd[1]: Reloading requested from client PID 2299 ('systemctl') (unit session-7.scope)... Feb 13 18:52:00.859635 systemd[1]: Reloading... Feb 13 18:52:01.076168 zram_generator::config[2339]: No configuration found. Feb 13 18:52:01.328947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:52:01.504589 systemd[1]: Reloading finished in 644 ms. Feb 13 18:52:01.601634 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 18:52:01.601841 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 18:52:01.602426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:01.614724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:52:01.933364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:01.937136 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:52:02.016268 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:52:02.016268 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 18:52:02.016268 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:52:02.016787 kubelet[2403]: I0213 18:52:02.016393 2403 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:52:03.484359 kubelet[2403]: I0213 18:52:03.484290 2403 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 18:52:03.484359 kubelet[2403]: I0213 18:52:03.484345 2403 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:52:03.485179 kubelet[2403]: I0213 18:52:03.485137 2403 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 18:52:03.536700 kubelet[2403]: I0213 18:52:03.536415 2403 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:52:03.546532 kubelet[2403]: E0213 18:52:03.546486 2403 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:52:03.546702 kubelet[2403]: I0213 18:52:03.546681 2403 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:52:03.551256 kubelet[2403]: I0213 18:52:03.551221 2403 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 18:52:03.551855 kubelet[2403]: I0213 18:52:03.551811 2403 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:52:03.552297 kubelet[2403]: I0213 18:52:03.551975 2403 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.25.248","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 18:52:03.552542 kubelet[2403]: I0213 18:52:03.552521 2403 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:52:03.552635 
kubelet[2403]: I0213 18:52:03.552618 2403 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 18:52:03.552948 kubelet[2403]: I0213 18:52:03.552930 2403 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:52:03.558615 kubelet[2403]: I0213 18:52:03.558577 2403 kubelet.go:446] "Attempting to sync node with API server" Feb 13 18:52:03.558803 kubelet[2403]: I0213 18:52:03.558782 2403 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:52:03.558913 kubelet[2403]: I0213 18:52:03.558895 2403 kubelet.go:352] "Adding apiserver pod source" Feb 13 18:52:03.559238 kubelet[2403]: I0213 18:52:03.559016 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:52:03.564272 kubelet[2403]: E0213 18:52:03.564219 2403 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:03.564496 kubelet[2403]: E0213 18:52:03.564473 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:03.567746 kubelet[2403]: I0213 18:52:03.567660 2403 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:52:03.568548 kubelet[2403]: I0213 18:52:03.568494 2403 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:52:03.568664 kubelet[2403]: W0213 18:52:03.568626 2403 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 18:52:03.569837 kubelet[2403]: I0213 18:52:03.569786 2403 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 18:52:03.569933 kubelet[2403]: I0213 18:52:03.569847 2403 server.go:1287] "Started kubelet" Feb 13 18:52:03.572104 kubelet[2403]: I0213 18:52:03.572027 2403 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:52:03.574258 kubelet[2403]: I0213 18:52:03.573826 2403 server.go:490] "Adding debug handlers to kubelet server" Feb 13 18:52:03.579131 kubelet[2403]: I0213 18:52:03.577980 2403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:52:03.579131 kubelet[2403]: I0213 18:52:03.578495 2403 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:52:03.580361 kubelet[2403]: I0213 18:52:03.580313 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:52:03.590002 kubelet[2403]: I0213 18:52:03.589915 2403 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:52:03.596833 kubelet[2403]: I0213 18:52:03.596772 2403 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 18:52:03.597226 kubelet[2403]: E0213 18:52:03.597182 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:03.598588 kubelet[2403]: I0213 18:52:03.598527 2403 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 18:52:03.598891 kubelet[2403]: I0213 18:52:03.598732 2403 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:52:03.602855 kubelet[2403]: I0213 18:52:03.602785 2403 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:52:03.606475 kubelet[2403]: I0213 18:52:03.606154 2403 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:52:03.606475 kubelet[2403]: I0213 18:52:03.606215 2403 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:52:03.628156 kubelet[2403]: E0213 18:52:03.627136 2403 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:52:03.628156 kubelet[2403]: W0213 18:52:03.627895 2403 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 18:52:03.628156 kubelet[2403]: E0213 18:52:03.627964 2403 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 18:52:03.636307 kubelet[2403]: E0213 18:52:03.636006 2403 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.248.1823d939d8dc3896 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.248,UID:172.31.25.248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.25.248,},FirstTimestamp:2025-02-13 18:52:03.56981775 +0000 UTC m=+1.625528895,LastTimestamp:2025-02-13 18:52:03.56981775 +0000 UTC m=+1.625528895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.248,}" Feb 13 18:52:03.637009 kubelet[2403]: W0213 18:52:03.636958 2403 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.25.248" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 18:52:03.637289 kubelet[2403]: E0213 18:52:03.637252 2403 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.25.248\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 18:52:03.637708 kubelet[2403]: W0213 18:52:03.637661 2403 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 18:52:03.637881 kubelet[2403]: E0213 18:52:03.637846 2403 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 18:52:03.650708 kubelet[2403]: I0213 18:52:03.650671 2403 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 18:52:03.651203 
kubelet[2403]: I0213 18:52:03.650842 2403 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 18:52:03.651203 kubelet[2403]: I0213 18:52:03.650878 2403 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:52:03.655475 kubelet[2403]: I0213 18:52:03.655425 2403 policy_none.go:49] "None policy: Start" Feb 13 18:52:03.655475 kubelet[2403]: I0213 18:52:03.655471 2403 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 18:52:03.655630 kubelet[2403]: I0213 18:52:03.655496 2403 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:52:03.670565 kubelet[2403]: E0213 18:52:03.670491 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.25.248\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 18:52:03.670939 kubelet[2403]: E0213 18:52:03.670371 2403 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.248.1823d939dc45d206 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.248,UID:172.31.25.248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.25.248,},FirstTimestamp:2025-02-13 18:52:03.627069958 +0000 UTC m=+1.682781091,LastTimestamp:2025-02-13 18:52:03.627069958 +0000 UTC m=+1.682781091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.248,}" Feb 13 18:52:03.675909 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 18:52:03.697904 kubelet[2403]: E0213 18:52:03.697840 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:03.699598 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 18:52:03.703275 kubelet[2403]: E0213 18:52:03.702627 2403 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.248.1823d939dd92e15b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.248,UID:172.31.25.248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.25.248 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.25.248,},FirstTimestamp:2025-02-13 18:52:03.648897371 +0000 UTC m=+1.704608504,LastTimestamp:2025-02-13 18:52:03.648897371 +0000 UTC m=+1.704608504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.248,}" Feb 13 18:52:03.708764 kubelet[2403]: I0213 18:52:03.708450 2403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:52:03.711839 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 18:52:03.715070 kubelet[2403]: I0213 18:52:03.715015 2403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 18:52:03.715446 kubelet[2403]: I0213 18:52:03.715085 2403 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 18:52:03.715512 kubelet[2403]: I0213 18:52:03.715460 2403 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 18:52:03.715512 kubelet[2403]: I0213 18:52:03.715477 2403 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 18:52:03.715631 kubelet[2403]: E0213 18:52:03.715540 2403 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:52:03.726164 kubelet[2403]: I0213 18:52:03.725541 2403 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:52:03.726164 kubelet[2403]: I0213 18:52:03.725846 2403 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 18:52:03.726164 kubelet[2403]: I0213 18:52:03.725866 2403 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:52:03.727002 kubelet[2403]: I0213 18:52:03.726954 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:52:03.730321 kubelet[2403]: E0213 18:52:03.730284 2403 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 18:52:03.730499 kubelet[2403]: E0213 18:52:03.730478 2403 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.248\" not found" Feb 13 18:52:03.736216 kubelet[2403]: E0213 18:52:03.734909 2403 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.248.1823d939dd9317a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.248,UID:172.31.25.248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.25.248 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.25.248,},FirstTimestamp:2025-02-13 18:52:03.648911272 +0000 UTC m=+1.704622405,LastTimestamp:2025-02-13 18:52:03.648911272 +0000 UTC m=+1.704622405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.248,}" Feb 13 18:52:03.736216 kubelet[2403]: W0213 18:52:03.735600 2403 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 18:52:03.736216 kubelet[2403]: E0213 18:52:03.735642 2403 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 18:52:03.829383 kubelet[2403]: I0213 
18:52:03.829330 2403 kubelet_node_status.go:76] "Attempting to register node" node="172.31.25.248" Feb 13 18:52:03.856262 kubelet[2403]: I0213 18:52:03.856209 2403 kubelet_node_status.go:79] "Successfully registered node" node="172.31.25.248" Feb 13 18:52:03.856262 kubelet[2403]: E0213 18:52:03.856259 2403 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.25.248\": node \"172.31.25.248\" not found" Feb 13 18:52:03.889819 kubelet[2403]: E0213 18:52:03.889769 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:03.990555 kubelet[2403]: E0213 18:52:03.990400 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.090947 sudo[2266]: pam_unix(sudo:session): session closed for user root Feb 13 18:52:04.091623 kubelet[2403]: E0213 18:52:04.091560 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.114650 sshd[2265]: Connection closed by 139.178.68.195 port 40426 Feb 13 18:52:04.115467 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:04.121590 systemd[1]: sshd@6-172.31.25.248:22-139.178.68.195:40426.service: Deactivated successfully. Feb 13 18:52:04.126729 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 18:52:04.129075 systemd-logind[1930]: Session 7 logged out. Waiting for processes to exit. Feb 13 18:52:04.130918 systemd-logind[1930]: Removed session 7. Feb 13 18:52:04.192237 kubelet[2403]: E0213 18:52:04.192187 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.293047 kubelet[2403]: E0213 18:52:04.292890 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.393712 kubelet[2403]: E0213 18:52:04.393654 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.490338 kubelet[2403]: I0213 18:52:04.490283 2403 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 18:52:04.494501 kubelet[2403]: E0213 18:52:04.494457 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.565378 kubelet[2403]: E0213 18:52:04.565267 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:04.594750 kubelet[2403]: E0213 18:52:04.594697 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.695467 kubelet[2403]: E0213 18:52:04.695409 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.795916 kubelet[2403]: E0213 18:52:04.795853 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.896827 kubelet[2403]: E0213 18:52:04.896691 2403 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.25.248\" not found" Feb 13 18:52:04.998580 kubelet[2403]: I0213 18:52:04.998348 2403 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 
13 18:52:04.998884 containerd[1954]: time="2025-02-13T18:52:04.998798927Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 18:52:04.999617 kubelet[2403]: I0213 18:52:04.999275 2403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 18:52:05.565849 kubelet[2403]: E0213 18:52:05.565789 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:05.566431 kubelet[2403]: I0213 18:52:05.565869 2403 apiserver.go:52] "Watching apiserver" Feb 13 18:52:05.579485 kubelet[2403]: E0213 18:52:05.578399 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:05.588043 systemd[1]: Created slice kubepods-besteffort-podf7d05b53_e671_427f_bc57_9629da213bc2.slice - libcontainer container kubepods-besteffort-podf7d05b53_e671_427f_bc57_9629da213bc2.slice. Feb 13 18:52:05.599257 kubelet[2403]: I0213 18:52:05.599201 2403 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 18:52:05.609010 systemd[1]: Created slice kubepods-besteffort-pod25e3af3f_282d_4bb8_9c46_85960cb8d43c.slice - libcontainer container kubepods-besteffort-pod25e3af3f_282d_4bb8_9c46_85960cb8d43c.slice. Feb 13 18:52:05.618785 kubelet[2403]: I0213 18:52:05.617232 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-xtables-lock\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.618785 kubelet[2403]: I0213 18:52:05.617323 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25e3af3f-282d-4bb8-9c46-85960cb8d43c-node-certs\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.618785 kubelet[2403]: I0213 18:52:05.617376 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvk8\" (UniqueName: \"kubernetes.io/projected/25e3af3f-282d-4bb8-9c46-85960cb8d43c-kube-api-access-pfvk8\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.618785 kubelet[2403]: I0213 18:52:05.617418 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7d05b53-e671-427f-bc57-9629da213bc2-xtables-lock\") pod \"kube-proxy-nfqbl\" (UID: \"f7d05b53-e671-427f-bc57-9629da213bc2\") " pod="kube-system/kube-proxy-nfqbl" Feb 13 18:52:05.618785 kubelet[2403]: I0213 18:52:05.617467 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25e3af3f-282d-4bb8-9c46-85960cb8d43c-tigera-ca-bundle\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619153 kubelet[2403]: I0213 
18:52:05.617513 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-var-run-calico\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619153 kubelet[2403]: I0213 18:52:05.617560 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-cni-net-dir\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619153 kubelet[2403]: I0213 18:52:05.617604 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-flexvol-driver-host\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619153 kubelet[2403]: I0213 18:52:05.617652 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0446ee6e-94f2-402c-a109-4fa0a50e3591-socket-dir\") pod \"csi-node-driver-rknk4\" (UID: \"0446ee6e-94f2-402c-a109-4fa0a50e3591\") " pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:05.619153 kubelet[2403]: I0213 18:52:05.617700 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgf6m\" (UniqueName: \"kubernetes.io/projected/0446ee6e-94f2-402c-a109-4fa0a50e3591-kube-api-access-wgf6m\") pod \"csi-node-driver-rknk4\" (UID: \"0446ee6e-94f2-402c-a109-4fa0a50e3591\") " pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:05.619389 kubelet[2403]: I0213 18:52:05.617747 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7d05b53-e671-427f-bc57-9629da213bc2-lib-modules\") pod \"kube-proxy-nfqbl\" (UID: \"f7d05b53-e671-427f-bc57-9629da213bc2\") " pod="kube-system/kube-proxy-nfqbl" Feb 13 18:52:05.619389 kubelet[2403]: I0213 18:52:05.617793 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-lib-modules\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619389 kubelet[2403]: I0213 18:52:05.617839 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-var-lib-calico\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619389 kubelet[2403]: I0213 18:52:05.617891 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0446ee6e-94f2-402c-a109-4fa0a50e3591-kubelet-dir\") pod \"csi-node-driver-rknk4\" (UID: \"0446ee6e-94f2-402c-a109-4fa0a50e3591\") " pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:05.619389 kubelet[2403]: I0213 18:52:05.617941 2403 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5jm\" (UniqueName: \"kubernetes.io/projected/f7d05b53-e671-427f-bc57-9629da213bc2-kube-api-access-kh5jm\") pod \"kube-proxy-nfqbl\" (UID: \"f7d05b53-e671-427f-bc57-9629da213bc2\") " pod="kube-system/kube-proxy-nfqbl" Feb 13 18:52:05.619609 kubelet[2403]: I0213 18:52:05.617988 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-policysync\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619609 kubelet[2403]: I0213 18:52:05.618038 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-cni-bin-dir\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619609 kubelet[2403]: I0213 18:52:05.618111 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25e3af3f-282d-4bb8-9c46-85960cb8d43c-cni-log-dir\") pod \"calico-node-bsnnc\" (UID: \"25e3af3f-282d-4bb8-9c46-85960cb8d43c\") " pod="calico-system/calico-node-bsnnc" Feb 13 18:52:05.619609 kubelet[2403]: I0213 18:52:05.618160 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0446ee6e-94f2-402c-a109-4fa0a50e3591-varrun\") pod \"csi-node-driver-rknk4\" (UID: \"0446ee6e-94f2-402c-a109-4fa0a50e3591\") " pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:05.619609 kubelet[2403]: I0213 18:52:05.618209 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0446ee6e-94f2-402c-a109-4fa0a50e3591-registration-dir\") pod \"csi-node-driver-rknk4\" (UID: \"0446ee6e-94f2-402c-a109-4fa0a50e3591\") " pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:05.619827 kubelet[2403]: I0213 18:52:05.618257 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7d05b53-e671-427f-bc57-9629da213bc2-kube-proxy\") pod \"kube-proxy-nfqbl\" (UID: \"f7d05b53-e671-427f-bc57-9629da213bc2\") " pod="kube-system/kube-proxy-nfqbl" Feb 13 18:52:05.724037 kubelet[2403]: E0213 18:52:05.723991 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.724037 kubelet[2403]: W0213 18:52:05.724025 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.724281 kubelet[2403]: E0213 18:52:05.724081 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.724539 kubelet[2403]: E0213 18:52:05.724428 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.724539 kubelet[2403]: W0213 18:52:05.724459 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.724539 kubelet[2403]: E0213 18:52:05.724498 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.725945 kubelet[2403]: E0213 18:52:05.725832 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.725945 kubelet[2403]: W0213 18:52:05.725861 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.726578 kubelet[2403]: E0213 18:52:05.725896 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.727836 kubelet[2403]: E0213 18:52:05.727493 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.727836 kubelet[2403]: W0213 18:52:05.727579 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.727836 kubelet[2403]: E0213 18:52:05.727682 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.728656 kubelet[2403]: E0213 18:52:05.728434 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.728656 kubelet[2403]: W0213 18:52:05.728458 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.728656 kubelet[2403]: E0213 18:52:05.728616 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.729165 kubelet[2403]: E0213 18:52:05.729005 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.729165 kubelet[2403]: W0213 18:52:05.729026 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.729165 kubelet[2403]: E0213 18:52:05.729085 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.729856 kubelet[2403]: E0213 18:52:05.729677 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.729856 kubelet[2403]: W0213 18:52:05.729696 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.729856 kubelet[2403]: E0213 18:52:05.729739 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.730426 kubelet[2403]: E0213 18:52:05.730293 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.730426 kubelet[2403]: W0213 18:52:05.730317 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.730426 kubelet[2403]: E0213 18:52:05.730364 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.731026 kubelet[2403]: E0213 18:52:05.730904 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.731026 kubelet[2403]: W0213 18:52:05.730926 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.731026 kubelet[2403]: E0213 18:52:05.730969 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.731717 kubelet[2403]: E0213 18:52:05.731491 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.731717 kubelet[2403]: W0213 18:52:05.731523 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.731717 kubelet[2403]: E0213 18:52:05.731569 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.732010 kubelet[2403]: E0213 18:52:05.731991 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.732220 kubelet[2403]: W0213 18:52:05.732145 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.732220 kubelet[2403]: E0213 18:52:05.732193 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.732892 kubelet[2403]: E0213 18:52:05.732719 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.732892 kubelet[2403]: W0213 18:52:05.732740 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.732892 kubelet[2403]: E0213 18:52:05.732785 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.733308 kubelet[2403]: E0213 18:52:05.733205 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.733308 kubelet[2403]: W0213 18:52:05.733226 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.733308 kubelet[2403]: E0213 18:52:05.733269 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.733889 kubelet[2403]: E0213 18:52:05.733759 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.733889 kubelet[2403]: W0213 18:52:05.733779 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.733889 kubelet[2403]: E0213 18:52:05.733819 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.734556 kubelet[2403]: E0213 18:52:05.734389 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.734556 kubelet[2403]: W0213 18:52:05.734410 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.734556 kubelet[2403]: E0213 18:52:05.734452 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.734964 kubelet[2403]: E0213 18:52:05.734806 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.734964 kubelet[2403]: W0213 18:52:05.734822 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.734964 kubelet[2403]: E0213 18:52:05.734862 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.735406 kubelet[2403]: E0213 18:52:05.735387 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.735568 kubelet[2403]: W0213 18:52:05.735483 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.735568 kubelet[2403]: E0213 18:52:05.735529 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.736128 kubelet[2403]: E0213 18:52:05.735933 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.736128 kubelet[2403]: W0213 18:52:05.735953 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.736128 kubelet[2403]: E0213 18:52:05.735992 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.736502 kubelet[2403]: E0213 18:52:05.736404 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.736502 kubelet[2403]: W0213 18:52:05.736424 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.736502 kubelet[2403]: E0213 18:52:05.736464 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.737076 kubelet[2403]: E0213 18:52:05.736954 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.737076 kubelet[2403]: W0213 18:52:05.736979 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.737076 kubelet[2403]: E0213 18:52:05.737021 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.737572 kubelet[2403]: E0213 18:52:05.737474 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.737572 kubelet[2403]: W0213 18:52:05.737494 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.737572 kubelet[2403]: E0213 18:52:05.737535 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.738149 kubelet[2403]: E0213 18:52:05.737972 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.738149 kubelet[2403]: W0213 18:52:05.737991 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.738149 kubelet[2403]: E0213 18:52:05.738030 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.738616 kubelet[2403]: E0213 18:52:05.738516 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.738616 kubelet[2403]: W0213 18:52:05.738537 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.738616 kubelet[2403]: E0213 18:52:05.738576 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.739202 kubelet[2403]: E0213 18:52:05.739043 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.739202 kubelet[2403]: W0213 18:52:05.739062 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.739202 kubelet[2403]: E0213 18:52:05.739143 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.739839 kubelet[2403]: E0213 18:52:05.739640 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.739839 kubelet[2403]: W0213 18:52:05.739661 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.739839 kubelet[2403]: E0213 18:52:05.739702 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.740302 kubelet[2403]: E0213 18:52:05.740168 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.740302 kubelet[2403]: W0213 18:52:05.740190 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.740302 kubelet[2403]: E0213 18:52:05.740231 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.740902 kubelet[2403]: E0213 18:52:05.740691 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.740902 kubelet[2403]: W0213 18:52:05.740710 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.740902 kubelet[2403]: E0213 18:52:05.740749 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.741385 kubelet[2403]: E0213 18:52:05.741245 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.741385 kubelet[2403]: W0213 18:52:05.741268 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.741385 kubelet[2403]: E0213 18:52:05.741309 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.741933 kubelet[2403]: E0213 18:52:05.741806 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.741933 kubelet[2403]: W0213 18:52:05.741826 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.741933 kubelet[2403]: E0213 18:52:05.741867 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.742403 kubelet[2403]: E0213 18:52:05.742376 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.742512 kubelet[2403]: W0213 18:52:05.742402 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.742512 kubelet[2403]: E0213 18:52:05.742426 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.753030 kubelet[2403]: E0213 18:52:05.752995 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.753336 kubelet[2403]: W0213 18:52:05.753222 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.753336 kubelet[2403]: E0213 18:52:05.753265 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:05.786241 kubelet[2403]: E0213 18:52:05.785303 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.786241 kubelet[2403]: W0213 18:52:05.785339 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.786241 kubelet[2403]: E0213 18:52:05.785896 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.786241 kubelet[2403]: W0213 18:52:05.785921 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.786241 kubelet[2403]: E0213 18:52:05.785949 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.786241 kubelet[2403]: E0213 18:52:05.786165 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.787009 kubelet[2403]: E0213 18:52:05.786885 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:05.787009 kubelet[2403]: W0213 18:52:05.786912 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:05.787009 kubelet[2403]: E0213 18:52:05.786938 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:05.903047 containerd[1954]: time="2025-02-13T18:52:05.902922589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfqbl,Uid:f7d05b53-e671-427f-bc57-9629da213bc2,Namespace:kube-system,Attempt:0,}" Feb 13 18:52:05.925703 containerd[1954]: time="2025-02-13T18:52:05.925592317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bsnnc,Uid:25e3af3f-282d-4bb8-9c46-85960cb8d43c,Namespace:calico-system,Attempt:0,}" Feb 13 18:52:06.501494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92705549.mount: Deactivated successfully. 
Feb 13 18:52:06.513174 containerd[1954]: time="2025-02-13T18:52:06.511773797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:06.515535 containerd[1954]: time="2025-02-13T18:52:06.515458738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 18:52:06.519920 containerd[1954]: time="2025-02-13T18:52:06.519847834Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:06.522917 containerd[1954]: time="2025-02-13T18:52:06.522755683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:06.523724 containerd[1954]: time="2025-02-13T18:52:06.523635827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:52:06.529413 containerd[1954]: time="2025-02-13T18:52:06.529299142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:06.532568 containerd[1954]: time="2025-02-13T18:52:06.531084041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.641248ms" Feb 13 18:52:06.535901 containerd[1954]: time="2025-02-13T18:52:06.535843836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.146225ms" Feb 13 18:52:06.566658 kubelet[2403]: E0213 18:52:06.566609 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:06.718642 containerd[1954]: time="2025-02-13T18:52:06.718073033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:06.718642 containerd[1954]: time="2025-02-13T18:52:06.718228751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:06.718642 containerd[1954]: time="2025-02-13T18:52:06.718267863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:06.718642 containerd[1954]: time="2025-02-13T18:52:06.718437554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:06.819539 containerd[1954]: time="2025-02-13T18:52:06.807628537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:06.819539 containerd[1954]: time="2025-02-13T18:52:06.807722258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:06.819539 containerd[1954]: time="2025-02-13T18:52:06.807760483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:06.819539 containerd[1954]: time="2025-02-13T18:52:06.807916033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:06.944461 systemd[1]: Started cri-containerd-1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f.scope - libcontainer container 1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f. Feb 13 18:52:06.948163 systemd[1]: Started cri-containerd-a2501e3a7e69b1397188d4754582041cae1bf7cfdca9f418960e540c021be812.scope - libcontainer container a2501e3a7e69b1397188d4754582041cae1bf7cfdca9f418960e540c021be812. Feb 13 18:52:07.034356 containerd[1954]: time="2025-02-13T18:52:07.033567060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfqbl,Uid:f7d05b53-e671-427f-bc57-9629da213bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2501e3a7e69b1397188d4754582041cae1bf7cfdca9f418960e540c021be812\"" Feb 13 18:52:07.040979 containerd[1954]: time="2025-02-13T18:52:07.040665900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 18:52:07.046979 containerd[1954]: time="2025-02-13T18:52:07.046800220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bsnnc,Uid:25e3af3f-282d-4bb8-9c46-85960cb8d43c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\"" Feb 13 18:52:07.568175 kubelet[2403]: E0213 18:52:07.568117 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:07.717645 kubelet[2403]: E0213 18:52:07.717157 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:08.381046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070625502.mount: Deactivated successfully. 
Feb 13 18:52:08.568873 kubelet[2403]: E0213 18:52:08.568808 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:08.952538 containerd[1954]: time="2025-02-13T18:52:08.952457404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:08.953916 containerd[1954]: time="2025-02-13T18:52:08.953847089Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 18:52:08.956280 containerd[1954]: time="2025-02-13T18:52:08.956179766Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:08.961944 containerd[1954]: time="2025-02-13T18:52:08.961838595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:08.963423 containerd[1954]: time="2025-02-13T18:52:08.963147249Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.922419459s" Feb 13 18:52:08.963423 containerd[1954]: time="2025-02-13T18:52:08.963202433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 18:52:08.966492 containerd[1954]: time="2025-02-13T18:52:08.966349431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 18:52:08.968459 containerd[1954]: time="2025-02-13T18:52:08.968160824Z" level=info msg="CreateContainer within sandbox \"a2501e3a7e69b1397188d4754582041cae1bf7cfdca9f418960e540c021be812\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 18:52:08.990433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226081477.mount: Deactivated successfully. Feb 13 18:52:08.997923 containerd[1954]: time="2025-02-13T18:52:08.997749963Z" level=info msg="CreateContainer within sandbox \"a2501e3a7e69b1397188d4754582041cae1bf7cfdca9f418960e540c021be812\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ee5a51e8625a833e0992d41b15733adcd00c2dbcc91ec4b07821c01b93d74ed\"" Feb 13 18:52:08.999528 containerd[1954]: time="2025-02-13T18:52:08.999475923Z" level=info msg="StartContainer for \"9ee5a51e8625a833e0992d41b15733adcd00c2dbcc91ec4b07821c01b93d74ed\"" Feb 13 18:52:09.054420 systemd[1]: Started cri-containerd-9ee5a51e8625a833e0992d41b15733adcd00c2dbcc91ec4b07821c01b93d74ed.scope - libcontainer container 9ee5a51e8625a833e0992d41b15733adcd00c2dbcc91ec4b07821c01b93d74ed. 
Feb 13 18:52:09.114264 containerd[1954]: time="2025-02-13T18:52:09.114184045Z" level=info msg="StartContainer for \"9ee5a51e8625a833e0992d41b15733adcd00c2dbcc91ec4b07821c01b93d74ed\" returns successfully" Feb 13 18:52:09.569210 kubelet[2403]: E0213 18:52:09.568937 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:09.718241 kubelet[2403]: E0213 18:52:09.718073 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:09.870627 kubelet[2403]: I0213 18:52:09.870409 2403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nfqbl" podStartSLOduration=4.944711171 podStartE2EDuration="6.870382167s" podCreationTimestamp="2025-02-13 18:52:03 +0000 UTC" firstStartedPulling="2025-02-13 18:52:07.039531748 +0000 UTC m=+5.095242881" lastFinishedPulling="2025-02-13 18:52:08.965202672 +0000 UTC m=+7.020913877" observedRunningTime="2025-02-13 18:52:09.87020011 +0000 UTC m=+7.925911267" watchObservedRunningTime="2025-02-13 18:52:09.870382167 +0000 UTC m=+7.926093300" Feb 13 18:52:09.915908 kubelet[2403]: E0213 18:52:09.915865 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.916327 kubelet[2403]: W0213 18:52:09.916142 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.916327 kubelet[2403]: E0213 18:52:09.916186 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.916855 kubelet[2403]: E0213 18:52:09.916731 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.916855 kubelet[2403]: W0213 18:52:09.916778 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.917162 kubelet[2403]: E0213 18:52:09.917015 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.917618 kubelet[2403]: E0213 18:52:09.917593 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.917907 kubelet[2403]: W0213 18:52:09.917740 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.917907 kubelet[2403]: E0213 18:52:09.917773 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.918753 kubelet[2403]: E0213 18:52:09.918495 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.918753 kubelet[2403]: W0213 18:52:09.918520 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.918753 kubelet[2403]: E0213 18:52:09.918546 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.919324 kubelet[2403]: E0213 18:52:09.919081 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.919324 kubelet[2403]: W0213 18:52:09.919141 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.919324 kubelet[2403]: E0213 18:52:09.919167 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.919618 kubelet[2403]: E0213 18:52:09.919594 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.919736 kubelet[2403]: W0213 18:52:09.919712 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.919846 kubelet[2403]: E0213 18:52:09.919822 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.920822 kubelet[2403]: E0213 18:52:09.920782 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.920822 kubelet[2403]: W0213 18:52:09.920816 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.921018 kubelet[2403]: E0213 18:52:09.920848 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.922009 kubelet[2403]: E0213 18:52:09.921952 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.922009 kubelet[2403]: W0213 18:52:09.921992 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.922265 kubelet[2403]: E0213 18:52:09.922025 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.923067 kubelet[2403]: E0213 18:52:09.923019 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.923067 kubelet[2403]: W0213 18:52:09.923058 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.923265 kubelet[2403]: E0213 18:52:09.923111 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.923543 kubelet[2403]: E0213 18:52:09.923499 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.923543 kubelet[2403]: W0213 18:52:09.923532 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.923674 kubelet[2403]: E0213 18:52:09.923556 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.923922 kubelet[2403]: E0213 18:52:09.923881 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.923996 kubelet[2403]: W0213 18:52:09.923911 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.923996 kubelet[2403]: E0213 18:52:09.923961 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.925714 kubelet[2403]: E0213 18:52:09.925479 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.925714 kubelet[2403]: W0213 18:52:09.925512 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.925714 kubelet[2403]: E0213 18:52:09.925545 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.926160 kubelet[2403]: E0213 18:52:09.926079 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.926259 kubelet[2403]: W0213 18:52:09.926159 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.926259 kubelet[2403]: E0213 18:52:09.926190 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.926777 kubelet[2403]: E0213 18:52:09.926723 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.926777 kubelet[2403]: W0213 18:52:09.926762 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.926966 kubelet[2403]: E0213 18:52:09.926807 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.927312 kubelet[2403]: E0213 18:52:09.927270 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.927312 kubelet[2403]: W0213 18:52:09.927302 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.927448 kubelet[2403]: E0213 18:52:09.927328 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.927694 kubelet[2403]: E0213 18:52:09.927652 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.927694 kubelet[2403]: W0213 18:52:09.927682 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.927845 kubelet[2403]: E0213 18:52:09.927708 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.928082 kubelet[2403]: E0213 18:52:09.928054 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.928183 kubelet[2403]: W0213 18:52:09.928080 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.928183 kubelet[2403]: E0213 18:52:09.928159 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.928496 kubelet[2403]: E0213 18:52:09.928470 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.928580 kubelet[2403]: W0213 18:52:09.928496 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.928580 kubelet[2403]: E0213 18:52:09.928518 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.928840 kubelet[2403]: E0213 18:52:09.928813 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.928901 kubelet[2403]: W0213 18:52:09.928839 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.928901 kubelet[2403]: E0213 18:52:09.928861 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.929194 kubelet[2403]: E0213 18:52:09.929167 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.929281 kubelet[2403]: W0213 18:52:09.929192 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.929281 kubelet[2403]: E0213 18:52:09.929213 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.929678 kubelet[2403]: E0213 18:52:09.929649 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.929781 kubelet[2403]: W0213 18:52:09.929676 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.929781 kubelet[2403]: E0213 18:52:09.929701 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.930069 kubelet[2403]: E0213 18:52:09.930042 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.930195 kubelet[2403]: W0213 18:52:09.930067 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.930195 kubelet[2403]: E0213 18:52:09.930149 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.930616 kubelet[2403]: E0213 18:52:09.930567 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.930616 kubelet[2403]: W0213 18:52:09.930603 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.930759 kubelet[2403]: E0213 18:52:09.930642 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.931017 kubelet[2403]: E0213 18:52:09.930986 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.931177 kubelet[2403]: W0213 18:52:09.931016 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.931177 kubelet[2403]: E0213 18:52:09.931061 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.931499 kubelet[2403]: E0213 18:52:09.931470 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.931563 kubelet[2403]: W0213 18:52:09.931498 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.931563 kubelet[2403]: E0213 18:52:09.931530 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.931948 kubelet[2403]: E0213 18:52:09.931917 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.932061 kubelet[2403]: W0213 18:52:09.931946 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.932576 kubelet[2403]: E0213 18:52:09.932193 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.932576 kubelet[2403]: E0213 18:52:09.932310 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.932576 kubelet[2403]: W0213 18:52:09.932326 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.932576 kubelet[2403]: E0213 18:52:09.932347 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.932833 kubelet[2403]: E0213 18:52:09.932640 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.932833 kubelet[2403]: W0213 18:52:09.932658 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.932833 kubelet[2403]: E0213 18:52:09.932687 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 18:52:09.933062 kubelet[2403]: E0213 18:52:09.933035 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.933193 kubelet[2403]: W0213 18:52:09.933061 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.933249 kubelet[2403]: E0213 18:52:09.933209 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.933623 kubelet[2403]: E0213 18:52:09.933590 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.933623 kubelet[2403]: W0213 18:52:09.933620 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.933773 kubelet[2403]: E0213 18:52:09.933657 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.934426 kubelet[2403]: E0213 18:52:09.934328 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.934426 kubelet[2403]: W0213 18:52:09.934360 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.934426 kubelet[2403]: E0213 18:52:09.934402 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:09.934848 kubelet[2403]: E0213 18:52:09.934812 2403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 18:52:09.934997 kubelet[2403]: W0213 18:52:09.934847 2403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 18:52:09.934997 kubelet[2403]: E0213 18:52:09.934878 2403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 18:52:10.252848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2071605855.mount: Deactivated successfully. 
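The burst of driver-call.go / plugins.go messages above is one failure chain repeated per probe: the kubelet looks for a FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary does not exist yet, so the call's output is empty and unmarshalling it fails with "unexpected end of JSON input". Below is a minimal Go sketch of that chain; the DriverStatus field set follows the generic FlexVolume convention and is an assumption, not Calico's actual driver.

```go
// Illustrative only: reproduces the "unexpected end of JSON input" error the
// kubelet logs when a FlexVolume driver binary is missing and its output is
// therefore empty. DriverStatus is a minimal assumed shape, not Calico's driver.
package main

import (
	"encoding/json"
	"fmt"
)

type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver executable was not found in $PATH, so its "output" is empty.
	output := ""

	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println("error:", err) // error: unexpected end of JSON input
	}

	// What a successful `uds init` call would be expected to print instead:
	ok := DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	b, _ := json.Marshal(ok)
	fmt.Println(string(b)) // {"status":"Success","capabilities":{"attach":false}}
}
```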
Feb 13 18:52:10.387553 containerd[1954]: time="2025-02-13T18:52:10.387466813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:10.389133 containerd[1954]: time="2025-02-13T18:52:10.389035388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 18:52:10.390083 containerd[1954]: time="2025-02-13T18:52:10.389975310Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:10.394124 containerd[1954]: time="2025-02-13T18:52:10.394036622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:10.396110 containerd[1954]: time="2025-02-13T18:52:10.395809803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.429402057s" Feb 13 18:52:10.396110 containerd[1954]: time="2025-02-13T18:52:10.395886121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 18:52:10.401202 containerd[1954]: time="2025-02-13T18:52:10.401042905Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 18:52:10.427408 containerd[1954]: time="2025-02-13T18:52:10.427321436Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86\"" Feb 13 18:52:10.428472 containerd[1954]: time="2025-02-13T18:52:10.428392584Z" level=info msg="StartContainer for \"f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86\"" Feb 13 18:52:10.487274 systemd[1]: Started cri-containerd-f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86.scope - libcontainer container f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86. Feb 13 18:52:10.553677 containerd[1954]: time="2025-02-13T18:52:10.553528649Z" level=info msg="StartContainer for \"f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86\" returns successfully" Feb 13 18:52:10.569216 kubelet[2403]: E0213 18:52:10.569150 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:10.578369 systemd[1]: cri-containerd-f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86.scope: Deactivated successfully. Feb 13 18:52:10.619059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86-rootfs.mount: Deactivated successfully. 
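The flexvol-driver container started from the pod2daemon-flexvol image runs only briefly before its scope is deactivated; its job is essentially to install the missing uds driver binary into the kubelet plugin directory probed in the errors above, after which that noise stops. A minimal sketch of that install step, assuming a source path of /usr/local/bin/flexvol inside the container (only the destination path comes from the log):

```go
// A minimal sketch, not Calico's installer: copy a FlexVolume driver binary
// into the kubelet plugin directory the kubelet was probing above.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/flexvol" // assumed location inside the init container
	dstDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
	dst := filepath.Join(dstDir, "uds")

	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		log.Fatal(err)
	}
	log.Printf("installed %s", dst)
}
```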
Feb 13 18:52:10.865137 containerd[1954]: time="2025-02-13T18:52:10.864766461Z" level=info msg="shim disconnected" id=f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86 namespace=k8s.io Feb 13 18:52:10.865137 containerd[1954]: time="2025-02-13T18:52:10.864849519Z" level=warning msg="cleaning up after shim disconnected" id=f8d4ff70f1f0420a0410cb1cbf60c371eaed78266dff249cb28ebbc6cc69ea86 namespace=k8s.io Feb 13 18:52:10.865137 containerd[1954]: time="2025-02-13T18:52:10.864872356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:52:11.570172 kubelet[2403]: E0213 18:52:11.570068 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:11.716496 kubelet[2403]: E0213 18:52:11.715987 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:11.853345 containerd[1954]: time="2025-02-13T18:52:11.852908646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 18:52:12.570842 kubelet[2403]: E0213 18:52:12.570774 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:13.571365 kubelet[2403]: E0213 18:52:13.571280 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:13.717846 kubelet[2403]: E0213 18:52:13.716671 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:14.572163 kubelet[2403]: E0213 18:52:14.572106 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:15.380061 containerd[1954]: time="2025-02-13T18:52:15.379971187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:15.381521 containerd[1954]: time="2025-02-13T18:52:15.381153087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 18:52:15.383539 containerd[1954]: time="2025-02-13T18:52:15.383410334Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:15.387441 containerd[1954]: time="2025-02-13T18:52:15.387362153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:15.389146 containerd[1954]: time="2025-02-13T18:52:15.388913193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 
3.535944398s" Feb 13 18:52:15.389146 containerd[1954]: time="2025-02-13T18:52:15.388966974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 18:52:15.393198 containerd[1954]: time="2025-02-13T18:52:15.393021666Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 18:52:15.415942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043361971.mount: Deactivated successfully. Feb 13 18:52:15.418975 containerd[1954]: time="2025-02-13T18:52:15.418643369Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf\"" Feb 13 18:52:15.419644 containerd[1954]: time="2025-02-13T18:52:15.419581792Z" level=info msg="StartContainer for \"80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf\"" Feb 13 18:52:15.469423 systemd[1]: Started cri-containerd-80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf.scope - libcontainer container 80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf. Feb 13 18:52:15.525227 containerd[1954]: time="2025-02-13T18:52:15.524952632Z" level=info msg="StartContainer for \"80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf\" returns successfully" Feb 13 18:52:15.574130 kubelet[2403]: E0213 18:52:15.573555 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:15.720264 kubelet[2403]: E0213 18:52:15.720194 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:16.541086 containerd[1954]: time="2025-02-13T18:52:16.540994660Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:52:16.546891 systemd[1]: cri-containerd-80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf.scope: Deactivated successfully. Feb 13 18:52:16.574469 kubelet[2403]: E0213 18:52:16.574386 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:16.600981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf-rootfs.mount: Deactivated successfully. 
Feb 13 18:52:16.607468 kubelet[2403]: I0213 18:52:16.607406 2403 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 18:52:17.348579 containerd[1954]: time="2025-02-13T18:52:17.348243480Z" level=info msg="shim disconnected" id=80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf namespace=k8s.io Feb 13 18:52:17.348579 containerd[1954]: time="2025-02-13T18:52:17.348343666Z" level=warning msg="cleaning up after shim disconnected" id=80b8ab11c4604fdccdf76e8aad6dce3d5c9c7127708db2c997b83bfb8d08f7cf namespace=k8s.io Feb 13 18:52:17.348579 containerd[1954]: time="2025-02-13T18:52:17.348364236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:52:17.575465 kubelet[2403]: E0213 18:52:17.575388 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:17.727980 systemd[1]: Created slice kubepods-besteffort-pod0446ee6e_94f2_402c_a109_4fa0a50e3591.slice - libcontainer container kubepods-besteffort-pod0446ee6e_94f2_402c_a109_4fa0a50e3591.slice. Feb 13 18:52:17.732186 containerd[1954]: time="2025-02-13T18:52:17.732085940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:0,}" Feb 13 18:52:17.849969 containerd[1954]: time="2025-02-13T18:52:17.849405323Z" level=error msg="Failed to destroy network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.852568 containerd[1954]: time="2025-02-13T18:52:17.852444435Z" level=error msg="encountered an error cleaning up failed sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.853605 containerd[1954]: time="2025-02-13T18:52:17.852561580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.853693 kubelet[2403]: E0213 18:52:17.852887 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.853693 kubelet[2403]: E0213 18:52:17.852978 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:17.853693 kubelet[2403]: E0213 18:52:17.853012 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:17.853870 kubelet[2403]: E0213 18:52:17.853080 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:17.854226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9-shm.mount: Deactivated successfully. Feb 13 18:52:17.877027 containerd[1954]: time="2025-02-13T18:52:17.876722426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 18:52:17.877560 kubelet[2403]: I0213 18:52:17.877515 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9" Feb 13 18:52:17.878667 containerd[1954]: time="2025-02-13T18:52:17.878457454Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:17.879942 containerd[1954]: time="2025-02-13T18:52:17.879854671Z" level=info msg="Ensure that sandbox 624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9 in task-service has been cleanup successfully" Feb 13 18:52:17.880652 containerd[1954]: time="2025-02-13T18:52:17.880418785Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:17.880744 containerd[1954]: time="2025-02-13T18:52:17.880461987Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:17.884822 systemd[1]: run-netns-cni\x2db9a625a3\x2d2c12\x2dbf8f\x2db355\x2d685be13002df.mount: Deactivated successfully. 
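Every RunPodSandbox attempt for csi-node-driver-rknk4 fails on the same precondition: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, a file the calico/node container (whose image pull starts above) writes once it is running. A minimal sketch of that precondition check; only the file path and error wording come from the log, the rest is illustrative:

```go
// Sketch of the check the Calico CNI plugin is failing on above: read the
// node name from /var/lib/calico/nodename, which calico/node writes at startup.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			err)
		os.Exit(1)
	}
	fmt.Println("node name:", strings.TrimSpace(string(b)))
}
```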
Feb 13 18:52:17.889547 containerd[1954]: time="2025-02-13T18:52:17.889482531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:1,}" Feb 13 18:52:17.991863 containerd[1954]: time="2025-02-13T18:52:17.991654056Z" level=error msg="Failed to destroy network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.993417 containerd[1954]: time="2025-02-13T18:52:17.992267633Z" level=error msg="encountered an error cleaning up failed sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.993417 containerd[1954]: time="2025-02-13T18:52:17.992358331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.993582 kubelet[2403]: E0213 18:52:17.992775 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:17.993582 kubelet[2403]: E0213 18:52:17.992860 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:17.993582 kubelet[2403]: E0213 18:52:17.992898 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:17.993767 kubelet[2403]: E0213 18:52:17.992966 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:18.576448 kubelet[2403]: E0213 18:52:18.576375 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:18.748081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488-shm.mount: Deactivated successfully. Feb 13 18:52:18.881503 kubelet[2403]: I0213 18:52:18.881350 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488" Feb 13 18:52:18.883176 containerd[1954]: time="2025-02-13T18:52:18.882702783Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:18.883176 containerd[1954]: time="2025-02-13T18:52:18.883003833Z" level=info msg="Ensure that sandbox 26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488 in task-service has been cleanup successfully" Feb 13 18:52:18.886387 containerd[1954]: time="2025-02-13T18:52:18.886166542Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:18.886387 containerd[1954]: time="2025-02-13T18:52:18.886237751Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:18.886255 systemd[1]: run-netns-cni\x2d26e781b0\x2d7000\x2df623\x2d6ff5\x2d6398bf16db53.mount: Deactivated successfully. 
Feb 13 18:52:18.888170 containerd[1954]: time="2025-02-13T18:52:18.887783537Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:18.888170 containerd[1954]: time="2025-02-13T18:52:18.887965187Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:18.888170 containerd[1954]: time="2025-02-13T18:52:18.887989558Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:18.889938 containerd[1954]: time="2025-02-13T18:52:18.889482632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:2,}" Feb 13 18:52:18.996001 containerd[1954]: time="2025-02-13T18:52:18.995903271Z" level=error msg="Failed to destroy network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:18.999232 containerd[1954]: time="2025-02-13T18:52:18.999042604Z" level=error msg="encountered an error cleaning up failed sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:18.999345 containerd[1954]: time="2025-02-13T18:52:18.999292020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:18.999813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837-shm.mount: Deactivated successfully. 
Feb 13 18:52:19.000389 kubelet[2403]: E0213 18:52:18.999780 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:19.000389 kubelet[2403]: E0213 18:52:18.999884 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:19.000389 kubelet[2403]: E0213 18:52:18.999921 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:19.000563 kubelet[2403]: E0213 18:52:19.000030 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:19.577926 kubelet[2403]: E0213 18:52:19.577501 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:19.889204 kubelet[2403]: I0213 18:52:19.887540 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837" Feb 13 18:52:19.889364 containerd[1954]: time="2025-02-13T18:52:19.888561617Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:19.889364 containerd[1954]: time="2025-02-13T18:52:19.888819740Z" level=info msg="Ensure that sandbox b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837 in task-service has been cleanup successfully" Feb 13 18:52:19.894142 containerd[1954]: time="2025-02-13T18:52:19.891523332Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:19.894142 containerd[1954]: time="2025-02-13T18:52:19.891573335Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:19.894142 containerd[1954]: time="2025-02-13T18:52:19.893831973Z" level=info msg="StopPodSandbox for 
\"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:19.893157 systemd[1]: run-netns-cni\x2d7f60aafd\x2dad91\x2deecf\x2d8940\x2d85ec1fb1c294.mount: Deactivated successfully. Feb 13 18:52:19.895863 containerd[1954]: time="2025-02-13T18:52:19.895795318Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:19.897382 containerd[1954]: time="2025-02-13T18:52:19.897314215Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:19.899310 containerd[1954]: time="2025-02-13T18:52:19.898980349Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:19.899310 containerd[1954]: time="2025-02-13T18:52:19.899189656Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:19.899310 containerd[1954]: time="2025-02-13T18:52:19.899214856Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:19.901194 containerd[1954]: time="2025-02-13T18:52:19.900678208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:3,}" Feb 13 18:52:20.102000 containerd[1954]: time="2025-02-13T18:52:20.101793672Z" level=error msg="Failed to destroy network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:20.104774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272-shm.mount: Deactivated successfully. 
Feb 13 18:52:20.105429 containerd[1954]: time="2025-02-13T18:52:20.105245582Z" level=error msg="encountered an error cleaning up failed sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:20.106074 containerd[1954]: time="2025-02-13T18:52:20.105372070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:20.109159 kubelet[2403]: E0213 18:52:20.108969 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:20.109159 kubelet[2403]: E0213 18:52:20.109051 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:20.109159 kubelet[2403]: E0213 18:52:20.109087 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:20.109755 kubelet[2403]: E0213 18:52:20.109174 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:20.578352 kubelet[2403]: E0213 18:52:20.578274 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:20.894733 kubelet[2403]: I0213 18:52:20.894594 2403 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272" Feb 13 18:52:20.897021 containerd[1954]: time="2025-02-13T18:52:20.896430657Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:52:20.897021 containerd[1954]: time="2025-02-13T18:52:20.896718082Z" level=info msg="Ensure that sandbox 19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272 in task-service has been cleanup successfully" Feb 13 18:52:20.902046 containerd[1954]: time="2025-02-13T18:52:20.901492521Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:52:20.902046 containerd[1954]: time="2025-02-13T18:52:20.901562410Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:52:20.901912 systemd[1]: run-netns-cni\x2daf3ebb64\x2d66f6\x2dd75b\x2d683f\x2d6459dc357e49.mount: Deactivated successfully. Feb 13 18:52:20.904876 containerd[1954]: time="2025-02-13T18:52:20.903901624Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:20.905068 containerd[1954]: time="2025-02-13T18:52:20.904073846Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:20.905573 containerd[1954]: time="2025-02-13T18:52:20.905532508Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:20.906653 containerd[1954]: time="2025-02-13T18:52:20.906545630Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:20.907249 containerd[1954]: time="2025-02-13T18:52:20.907212568Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:20.907650 containerd[1954]: time="2025-02-13T18:52:20.907374739Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:20.908260 containerd[1954]: time="2025-02-13T18:52:20.908198535Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:20.908860 containerd[1954]: time="2025-02-13T18:52:20.908662606Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:20.908860 containerd[1954]: time="2025-02-13T18:52:20.908721521Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:20.910520 containerd[1954]: time="2025-02-13T18:52:20.910078546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:4,}" Feb 13 18:52:21.072884 containerd[1954]: time="2025-02-13T18:52:21.072700179Z" level=error msg="Failed to destroy network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:21.075996 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286-shm.mount: Deactivated successfully. Feb 13 18:52:21.076494 containerd[1954]: time="2025-02-13T18:52:21.076004682Z" level=error msg="encountered an error cleaning up failed sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:21.076494 containerd[1954]: time="2025-02-13T18:52:21.076171363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:21.077082 kubelet[2403]: E0213 18:52:21.076660 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:21.077082 kubelet[2403]: E0213 18:52:21.076736 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:21.077082 kubelet[2403]: E0213 18:52:21.076771 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:21.077430 kubelet[2403]: E0213 18:52:21.076850 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:21.578899 kubelet[2403]: E0213 18:52:21.578807 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
18:52:21.906148 kubelet[2403]: I0213 18:52:21.905459 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286" Feb 13 18:52:21.906962 containerd[1954]: time="2025-02-13T18:52:21.906601095Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:52:21.906962 containerd[1954]: time="2025-02-13T18:52:21.906921742Z" level=info msg="Ensure that sandbox 4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286 in task-service has been cleanup successfully" Feb 13 18:52:21.907792 containerd[1954]: time="2025-02-13T18:52:21.907669712Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:52:21.907792 containerd[1954]: time="2025-02-13T18:52:21.907717424Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:52:21.913356 containerd[1954]: time="2025-02-13T18:52:21.911795816Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:52:21.913356 containerd[1954]: time="2025-02-13T18:52:21.911983426Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:52:21.913356 containerd[1954]: time="2025-02-13T18:52:21.912014143Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:52:21.912684 systemd[1]: run-netns-cni\x2db27c3638\x2d6541\x2d5552\x2d0bf3\x2d4522d9d02c3b.mount: Deactivated successfully. Feb 13 18:52:21.916204 containerd[1954]: time="2025-02-13T18:52:21.916128505Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:21.916892 containerd[1954]: time="2025-02-13T18:52:21.916324319Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:21.916892 containerd[1954]: time="2025-02-13T18:52:21.916353380Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:21.918150 containerd[1954]: time="2025-02-13T18:52:21.918046093Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:21.918275 containerd[1954]: time="2025-02-13T18:52:21.918251299Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:21.918330 containerd[1954]: time="2025-02-13T18:52:21.918282291Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:21.919324 containerd[1954]: time="2025-02-13T18:52:21.919264504Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:21.919489 containerd[1954]: time="2025-02-13T18:52:21.919441055Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:21.919489 containerd[1954]: time="2025-02-13T18:52:21.919467010Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 
13 18:52:21.921656 containerd[1954]: time="2025-02-13T18:52:21.921578590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:5,}" Feb 13 18:52:22.076995 containerd[1954]: time="2025-02-13T18:52:22.076926138Z" level=error msg="Failed to destroy network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:22.080887 containerd[1954]: time="2025-02-13T18:52:22.080539199Z" level=error msg="encountered an error cleaning up failed sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:22.080887 containerd[1954]: time="2025-02-13T18:52:22.080673471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:22.082841 kubelet[2403]: E0213 18:52:22.081309 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:22.082841 kubelet[2403]: E0213 18:52:22.081393 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:22.082841 kubelet[2403]: E0213 18:52:22.081429 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:22.081914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913-shm.mount: Deactivated successfully. 
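The pattern from Attempt:0 through Attempt:5 is the same each time: before retrying RunPodSandbox, the leftover sandboxes from every earlier failed attempt are stopped and torn down, then the new attempt fails on the same missing-nodename error. A schematic of that retry-with-cleanup loop (not kubelet code, with hypothetical sandbox IDs):

```go
// Schematic only: mirrors the log's pattern of tearing down all previously
// failed sandboxes before each new RunPodSandbox attempt, which keeps failing
// until the CNI plugin (calico/node) becomes ready.
package main

import (
	"errors"
	"fmt"
)

var errNotReady = errors.New("cni plugin not initialized")

func runPodSandbox(attempt int) (string, error) {
	// In the log, every attempt fails the same way until calico/node is up.
	return fmt.Sprintf("sandbox-%d", attempt), errNotReady
}

func main() {
	var leftovers []string
	for attempt := 0; attempt < 7; attempt++ {
		for _, id := range leftovers {
			fmt.Printf("StopPodSandbox %q / TearDown network\n", id)
		}
		id, err := runPodSandbox(attempt)
		if err != nil {
			fmt.Printf("attempt %d: %v (sandbox %q kept for later cleanup)\n", attempt, err, id)
			leftovers = append(leftovers, id)
			continue
		}
		fmt.Println("sandbox ready:", id)
		return
	}
}
```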
Feb 13 18:52:22.084225 kubelet[2403]: E0213 18:52:22.083580 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:22.580226 kubelet[2403]: E0213 18:52:22.580180 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:22.885473 systemd[1]: Created slice kubepods-besteffort-pod21ef90db_b2fe_4e03_8456_7a3fe3670dc6.slice - libcontainer container kubepods-besteffort-pod21ef90db_b2fe_4e03_8456_7a3fe3670dc6.slice. Feb 13 18:52:22.887766 kubelet[2403]: I0213 18:52:22.887699 2403 status_manager.go:890] "Failed to get status for pod" podUID="21ef90db-b2fe-4e03-8456-7a3fe3670dc6" pod="default/nginx-deployment-7fcdb87857-csm8d" err="pods \"nginx-deployment-7fcdb87857-csm8d\" is forbidden: User \"system:node:172.31.25.248\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node '172.31.25.248' and this object" Feb 13 18:52:22.890136 kubelet[2403]: W0213 18:52:22.888560 2403 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.25.248" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.25.248' and this object Feb 13 18:52:22.890136 kubelet[2403]: E0213 18:52:22.888876 2403 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172.31.25.248\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '172.31.25.248' and this object" logger="UnhandledError" Feb 13 18:52:22.915469 kubelet[2403]: I0213 18:52:22.915206 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913" Feb 13 18:52:22.915469 kubelet[2403]: I0213 18:52:22.915282 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgfmd\" (UniqueName: \"kubernetes.io/projected/21ef90db-b2fe-4e03-8456-7a3fe3670dc6-kube-api-access-cgfmd\") pod \"nginx-deployment-7fcdb87857-csm8d\" (UID: \"21ef90db-b2fe-4e03-8456-7a3fe3670dc6\") " pod="default/nginx-deployment-7fcdb87857-csm8d" Feb 13 18:52:22.916911 containerd[1954]: time="2025-02-13T18:52:22.916854026Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:52:22.919143 containerd[1954]: time="2025-02-13T18:52:22.918658428Z" level=info msg="Ensure that sandbox 6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913 in task-service has been cleanup successfully" Feb 13 18:52:22.926031 
containerd[1954]: time="2025-02-13T18:52:22.924361503Z" level=info msg="TearDown network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" successfully" Feb 13 18:52:22.926031 containerd[1954]: time="2025-02-13T18:52:22.924414169Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" returns successfully" Feb 13 18:52:22.924646 systemd[1]: run-netns-cni\x2d5e373182\x2d8aa0\x2dcbfa\x2d3d5f\x2d383ed18a282e.mount: Deactivated successfully. Feb 13 18:52:22.929236 containerd[1954]: time="2025-02-13T18:52:22.928076249Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:52:22.929236 containerd[1954]: time="2025-02-13T18:52:22.928337766Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:52:22.929236 containerd[1954]: time="2025-02-13T18:52:22.928367307Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:52:22.929516 containerd[1954]: time="2025-02-13T18:52:22.929303727Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:52:22.929516 containerd[1954]: time="2025-02-13T18:52:22.929459649Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:52:22.929516 containerd[1954]: time="2025-02-13T18:52:22.929481993Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:52:22.930738 containerd[1954]: time="2025-02-13T18:52:22.930667516Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:22.930958 containerd[1954]: time="2025-02-13T18:52:22.930824338Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:22.930958 containerd[1954]: time="2025-02-13T18:52:22.930847606Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:22.931944 containerd[1954]: time="2025-02-13T18:52:22.931466041Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:22.932388 containerd[1954]: time="2025-02-13T18:52:22.932252116Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:22.932388 containerd[1954]: time="2025-02-13T18:52:22.932289045Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:22.933886 containerd[1954]: time="2025-02-13T18:52:22.933717686Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:22.933999 containerd[1954]: time="2025-02-13T18:52:22.933900019Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:22.933999 containerd[1954]: time="2025-02-13T18:52:22.933922880Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:22.935871 containerd[1954]: 
time="2025-02-13T18:52:22.935571611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:6,}" Feb 13 18:52:23.078476 containerd[1954]: time="2025-02-13T18:52:23.078275636Z" level=error msg="Failed to destroy network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:23.079074 containerd[1954]: time="2025-02-13T18:52:23.079023486Z" level=error msg="encountered an error cleaning up failed sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:23.081150 containerd[1954]: time="2025-02-13T18:52:23.079306196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:23.081300 kubelet[2403]: E0213 18:52:23.079595 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:23.081300 kubelet[2403]: E0213 18:52:23.079665 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:23.081300 kubelet[2403]: E0213 18:52:23.079699 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:23.081475 kubelet[2403]: E0213 18:52:23.079764 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:23.083483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd-shm.mount: Deactivated successfully. Feb 13 18:52:23.559648 kubelet[2403]: E0213 18:52:23.559589 2403 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:23.581663 kubelet[2403]: E0213 18:52:23.581514 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:23.924123 kubelet[2403]: I0213 18:52:23.922544 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd" Feb 13 18:52:23.924281 containerd[1954]: time="2025-02-13T18:52:23.923815289Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" Feb 13 18:52:23.924281 containerd[1954]: time="2025-02-13T18:52:23.924077190Z" level=info msg="Ensure that sandbox 59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd in task-service has been cleanup successfully" Feb 13 18:52:23.928127 containerd[1954]: time="2025-02-13T18:52:23.926453033Z" level=info msg="TearDown network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" successfully" Feb 13 18:52:23.928127 containerd[1954]: time="2025-02-13T18:52:23.926509969Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" returns successfully" Feb 13 18:52:23.928127 containerd[1954]: time="2025-02-13T18:52:23.927032008Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:52:23.928127 containerd[1954]: time="2025-02-13T18:52:23.927218502Z" level=info msg="TearDown network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" successfully" Feb 13 18:52:23.928127 containerd[1954]: time="2025-02-13T18:52:23.927241555Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" returns successfully" Feb 13 18:52:23.927686 systemd[1]: run-netns-cni\x2df7ced1a4\x2d1d43\x2d30f4\x2d5f07\x2d38a7503793b4.mount: Deactivated successfully. 
Feb 13 18:52:23.929946 containerd[1954]: time="2025-02-13T18:52:23.929877836Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:52:23.930072 containerd[1954]: time="2025-02-13T18:52:23.930043353Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:52:23.930183 containerd[1954]: time="2025-02-13T18:52:23.930067485Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:52:23.930880 containerd[1954]: time="2025-02-13T18:52:23.930821500Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:52:23.931018 containerd[1954]: time="2025-02-13T18:52:23.930976834Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:52:23.931083 containerd[1954]: time="2025-02-13T18:52:23.931011905Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:52:23.931725 containerd[1954]: time="2025-02-13T18:52:23.931672259Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:23.931922 containerd[1954]: time="2025-02-13T18:52:23.931849086Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:23.931922 containerd[1954]: time="2025-02-13T18:52:23.931883233Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:23.932503 containerd[1954]: time="2025-02-13T18:52:23.932408138Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:23.932610 containerd[1954]: time="2025-02-13T18:52:23.932555617Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:23.932610 containerd[1954]: time="2025-02-13T18:52:23.932577530Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:23.933455 containerd[1954]: time="2025-02-13T18:52:23.933392054Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:23.933588 containerd[1954]: time="2025-02-13T18:52:23.933557020Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:23.933588 containerd[1954]: time="2025-02-13T18:52:23.933579616Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:23.934588 containerd[1954]: time="2025-02-13T18:52:23.934529133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:7,}" Feb 13 18:52:24.032852 kubelet[2403]: E0213 18:52:24.032793 2403 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 18:52:24.032852 kubelet[2403]: E0213 18:52:24.032854 2403 projected.go:194] Error preparing data for projected volume 
kube-api-access-cgfmd for pod default/nginx-deployment-7fcdb87857-csm8d: failed to sync configmap cache: timed out waiting for the condition Feb 13 18:52:24.033227 kubelet[2403]: E0213 18:52:24.032981 2403 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21ef90db-b2fe-4e03-8456-7a3fe3670dc6-kube-api-access-cgfmd podName:21ef90db-b2fe-4e03-8456-7a3fe3670dc6 nodeName:}" failed. No retries permitted until 2025-02-13 18:52:24.532945229 +0000 UTC m=+22.588656374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cgfmd" (UniqueName: "kubernetes.io/projected/21ef90db-b2fe-4e03-8456-7a3fe3670dc6-kube-api-access-cgfmd") pod "nginx-deployment-7fcdb87857-csm8d" (UID: "21ef90db-b2fe-4e03-8456-7a3fe3670dc6") : failed to sync configmap cache: timed out waiting for the condition Feb 13 18:52:24.074148 containerd[1954]: time="2025-02-13T18:52:24.074049639Z" level=error msg="Failed to destroy network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.078459 containerd[1954]: time="2025-02-13T18:52:24.077495119Z" level=error msg="encountered an error cleaning up failed sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.078459 containerd[1954]: time="2025-02-13T18:52:24.077605020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.078377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78-shm.mount: Deactivated successfully. 
Feb 13 18:52:24.078753 kubelet[2403]: E0213 18:52:24.077900 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.078753 kubelet[2403]: E0213 18:52:24.077978 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:24.078753 kubelet[2403]: E0213 18:52:24.078014 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rknk4" Feb 13 18:52:24.078949 kubelet[2403]: E0213 18:52:24.078076 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rknk4_calico-system(0446ee6e-94f2-402c-a109-4fa0a50e3591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rknk4" podUID="0446ee6e-94f2-402c-a109-4fa0a50e3591" Feb 13 18:52:24.412734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151941769.mount: Deactivated successfully. 
Feb 13 18:52:24.489190 containerd[1954]: time="2025-02-13T18:52:24.488315719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:24.490753 containerd[1954]: time="2025-02-13T18:52:24.490662429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 18:52:24.492165 containerd[1954]: time="2025-02-13T18:52:24.492035227Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:24.496934 containerd[1954]: time="2025-02-13T18:52:24.496857354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:24.498860 containerd[1954]: time="2025-02-13T18:52:24.498440898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.621653621s" Feb 13 18:52:24.498860 containerd[1954]: time="2025-02-13T18:52:24.498500976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 18:52:24.520420 containerd[1954]: time="2025-02-13T18:52:24.520261978Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 18:52:24.542480 containerd[1954]: time="2025-02-13T18:52:24.542393955Z" level=info msg="CreateContainer within sandbox \"1d6f6d418f11f8ae9787cc52bde685619fcf0c3ff953eeab6954e022c5dfb09f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13\"" Feb 13 18:52:24.543806 containerd[1954]: time="2025-02-13T18:52:24.543749169Z" level=info msg="StartContainer for \"e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13\"" Feb 13 18:52:24.582543 kubelet[2403]: E0213 18:52:24.582477 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:24.589423 systemd[1]: Started cri-containerd-e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13.scope - libcontainer container e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13. Feb 13 18:52:24.650065 containerd[1954]: time="2025-02-13T18:52:24.650000093Z" level=info msg="StartContainer for \"e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13\" returns successfully" Feb 13 18:52:24.694546 containerd[1954]: time="2025-02-13T18:52:24.694357923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-csm8d,Uid:21ef90db-b2fe-4e03-8456-7a3fe3670dc6,Namespace:default,Attempt:0,}" Feb 13 18:52:24.785138 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 18:52:24.785286 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Feb 13 18:52:24.815871 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 18:52:24.850275 containerd[1954]: time="2025-02-13T18:52:24.850081279Z" level=error msg="Failed to destroy network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.851216 containerd[1954]: time="2025-02-13T18:52:24.851002430Z" level=error msg="encountered an error cleaning up failed sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.851540 containerd[1954]: time="2025-02-13T18:52:24.851181537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-csm8d,Uid:21ef90db-b2fe-4e03-8456-7a3fe3670dc6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.852191 kubelet[2403]: E0213 18:52:24.851992 2403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 18:52:24.852191 kubelet[2403]: E0213 18:52:24.852074 2403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-csm8d" Feb 13 18:52:24.852191 kubelet[2403]: E0213 18:52:24.852137 2403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-csm8d" Feb 13 18:52:24.852744 kubelet[2403]: E0213 18:52:24.852614 2403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-csm8d_default(21ef90db-b2fe-4e03-8456-7a3fe3670dc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-csm8d_default(21ef90db-b2fe-4e03-8456-7a3fe3670dc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-csm8d" podUID="21ef90db-b2fe-4e03-8456-7a3fe3670dc6" Feb 13 18:52:24.949598 kubelet[2403]: I0213 18:52:24.949450 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78" Feb 13 18:52:24.953334 containerd[1954]: time="2025-02-13T18:52:24.952141680Z" level=info msg="StopPodSandbox for \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\"" Feb 13 18:52:24.953334 containerd[1954]: time="2025-02-13T18:52:24.952440294Z" level=info msg="Ensure that sandbox 6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78 in task-service has been cleanup successfully" Feb 13 18:52:24.957020 containerd[1954]: time="2025-02-13T18:52:24.956705193Z" level=info msg="TearDown network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" successfully" Feb 13 18:52:24.957020 containerd[1954]: time="2025-02-13T18:52:24.956758854Z" level=info msg="StopPodSandbox for \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" returns successfully" Feb 13 18:52:24.957894 systemd[1]: run-netns-cni\x2dfe532132\x2d4f03\x2d6da7\x2d1be6\x2d6cd0e532a4be.mount: Deactivated successfully. Feb 13 18:52:24.961842 containerd[1954]: time="2025-02-13T18:52:24.960897120Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" Feb 13 18:52:24.962362 containerd[1954]: time="2025-02-13T18:52:24.962224508Z" level=info msg="TearDown network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" successfully" Feb 13 18:52:24.962362 containerd[1954]: time="2025-02-13T18:52:24.962279980Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" returns successfully" Feb 13 18:52:24.963383 kubelet[2403]: I0213 18:52:24.963018 2403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849" Feb 13 18:52:24.964673 containerd[1954]: time="2025-02-13T18:52:24.963870480Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:52:24.964673 containerd[1954]: time="2025-02-13T18:52:24.964046384Z" level=info msg="TearDown network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" successfully" Feb 13 18:52:24.964673 containerd[1954]: time="2025-02-13T18:52:24.964070504Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" returns successfully" Feb 13 18:52:24.966406 containerd[1954]: time="2025-02-13T18:52:24.965915613Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:52:24.966406 containerd[1954]: time="2025-02-13T18:52:24.966130654Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:52:24.966406 containerd[1954]: time="2025-02-13T18:52:24.966158156Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:52:24.967839 containerd[1954]: time="2025-02-13T18:52:24.965915337Z" level=info msg="StopPodSandbox for \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\"" Feb 13 18:52:24.969222 
containerd[1954]: time="2025-02-13T18:52:24.969006851Z" level=info msg="Ensure that sandbox 8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849 in task-service has been cleanup successfully" Feb 13 18:52:24.969965 containerd[1954]: time="2025-02-13T18:52:24.969712530Z" level=info msg="TearDown network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" successfully" Feb 13 18:52:24.969965 containerd[1954]: time="2025-02-13T18:52:24.969784086Z" level=info msg="StopPodSandbox for \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" returns successfully" Feb 13 18:52:24.972521 containerd[1954]: time="2025-02-13T18:52:24.971480278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-csm8d,Uid:21ef90db-b2fe-4e03-8456-7a3fe3670dc6,Namespace:default,Attempt:1,}" Feb 13 18:52:24.974591 containerd[1954]: time="2025-02-13T18:52:24.974293278Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:52:24.974591 containerd[1954]: time="2025-02-13T18:52:24.974538640Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:52:24.974953 containerd[1954]: time="2025-02-13T18:52:24.974787947Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:52:24.976687 systemd[1]: run-netns-cni\x2d14339ff8\x2dc776\x2d5acc\x2d216f\x2d6c18e4609a56.mount: Deactivated successfully. Feb 13 18:52:24.979312 containerd[1954]: time="2025-02-13T18:52:24.977740918Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:52:24.979312 containerd[1954]: time="2025-02-13T18:52:24.977914843Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:52:24.979312 containerd[1954]: time="2025-02-13T18:52:24.977939982Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:52:24.984963 containerd[1954]: time="2025-02-13T18:52:24.983176610Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:52:24.984963 containerd[1954]: time="2025-02-13T18:52:24.984651381Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:52:24.984963 containerd[1954]: time="2025-02-13T18:52:24.984754313Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:52:24.986618 containerd[1954]: time="2025-02-13T18:52:24.986574499Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:52:24.988627 containerd[1954]: time="2025-02-13T18:52:24.988319062Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:52:24.988627 containerd[1954]: time="2025-02-13T18:52:24.988367734Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:52:24.990668 containerd[1954]: time="2025-02-13T18:52:24.990265005Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:8,}" Feb 13 18:52:25.013082 kubelet[2403]: I0213 18:52:25.009009 2403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bsnnc" podStartSLOduration=4.557596627 podStartE2EDuration="22.008988694s" podCreationTimestamp="2025-02-13 18:52:03 +0000 UTC" firstStartedPulling="2025-02-13 18:52:07.048818906 +0000 UTC m=+5.104530039" lastFinishedPulling="2025-02-13 18:52:24.500210973 +0000 UTC m=+22.555922106" observedRunningTime="2025-02-13 18:52:25.008790145 +0000 UTC m=+23.064501302" watchObservedRunningTime="2025-02-13 18:52:25.008988694 +0000 UTC m=+23.064699827" Feb 13 18:52:25.431552 (udev-worker)[3221]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:52:25.433041 systemd-networkd[1860]: cali881b9a8a6fa: Link UP Feb 13 18:52:25.433523 systemd-networkd[1860]: cali881b9a8a6fa: Gained carrier Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.082 [INFO][3258] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.113 [INFO][3258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0 nginx-deployment-7fcdb87857- default 21ef90db-b2fe-4e03-8456-7a3fe3670dc6 1113 0 2025-02-13 18:52:22 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.248 nginx-deployment-7fcdb87857-csm8d eth0 default [] [] [kns.default ksa.default.default] cali881b9a8a6fa [] []}} ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.114 [INFO][3258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.193 [INFO][3283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" HandleID="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Workload="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.346 [INFO][3283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" HandleID="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Workload="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316810), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.248", "pod":"nginx-deployment-7fcdb87857-csm8d", "timestamp":"2025-02-13 18:52:25.193612421 +0000 UTC"}, Hostname:"172.31.25.248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.347 [INFO][3283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.347 [INFO][3283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.347 [INFO][3283] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.248' Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.351 [INFO][3283] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.358 [INFO][3283] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.371 [INFO][3283] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.376 [INFO][3283] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.381 [INFO][3283] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.381 [INFO][3283] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.388 [INFO][3283] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1 Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.403 [INFO][3283] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.415 [INFO][3283] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.193/26] block=192.168.92.192/26 handle="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.416 [INFO][3283] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.193/26] handle="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" host="172.31.25.248" Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.416 [INFO][3283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 18:52:25.455481 containerd[1954]: 2025-02-13 18:52:25.416 [INFO][3283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.193/26] IPv6=[] ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" HandleID="k8s-pod-network.d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Workload="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.421 [INFO][3258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"21ef90db-b2fe-4e03-8456-7a3fe3670dc6", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-csm8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali881b9a8a6fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.421 [INFO][3258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.193/32] ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.422 [INFO][3258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali881b9a8a6fa ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.434 [INFO][3258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.435 [INFO][3258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" 
WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"21ef90db-b2fe-4e03-8456-7a3fe3670dc6", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1", Pod:"nginx-deployment-7fcdb87857-csm8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali881b9a8a6fa", MAC:"d2:d5:10:59:e7:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:25.456817 containerd[1954]: 2025-02-13 18:52:25.451 [INFO][3258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1" Namespace="default" Pod="nginx-deployment-7fcdb87857-csm8d" WorkloadEndpoint="172.31.25.248-k8s-nginx--deployment--7fcdb87857--csm8d-eth0" Feb 13 18:52:25.495617 containerd[1954]: time="2025-02-13T18:52:25.495448652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:25.496081 containerd[1954]: time="2025-02-13T18:52:25.495880256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:25.496081 containerd[1954]: time="2025-02-13T18:52:25.496054421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:25.497341 containerd[1954]: time="2025-02-13T18:52:25.497166576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:25.533666 systemd-networkd[1860]: cali5e9101f5fa5: Link UP Feb 13 18:52:25.535047 systemd-networkd[1860]: cali5e9101f5fa5: Gained carrier Feb 13 18:52:25.537948 systemd[1]: Started cri-containerd-d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1.scope - libcontainer container d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1. 
Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.091 [INFO][3269] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.121 [INFO][3269] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.248-k8s-csi--node--driver--rknk4-eth0 csi-node-driver- calico-system 0446ee6e-94f2-402c-a109-4fa0a50e3591 908 0 2025-02-13 18:52:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.25.248 csi-node-driver-rknk4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5e9101f5fa5 [] []}} ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.121 [INFO][3269] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.191 [INFO][3287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" HandleID="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Workload="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.347 [INFO][3287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" HandleID="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Workload="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317670), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.25.248", "pod":"csi-node-driver-rknk4", "timestamp":"2025-02-13 18:52:25.191276817 +0000 UTC"}, Hostname:"172.31.25.248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.347 [INFO][3287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.416 [INFO][3287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.416 [INFO][3287] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.248' Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.455 [INFO][3287] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.464 [INFO][3287] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.474 [INFO][3287] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.480 [INFO][3287] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.485 [INFO][3287] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.486 [INFO][3287] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.489 [INFO][3287] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713 Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.500 [INFO][3287] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.514 [INFO][3287] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.194/26] block=192.168.92.192/26 handle="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.514 [INFO][3287] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.194/26] handle="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" host="172.31.25.248" Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.514 [INFO][3287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 18:52:25.569872 containerd[1954]: 2025-02-13 18:52:25.514 [INFO][3287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.194/26] IPv6=[] ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" HandleID="k8s-pod-network.c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Workload="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.522 [INFO][3269] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-csi--node--driver--rknk4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0446ee6e-94f2-402c-a109-4fa0a50e3591", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"", Pod:"csi-node-driver-rknk4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e9101f5fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.523 [INFO][3269] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.194/32] ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.523 [INFO][3269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e9101f5fa5 ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.536 [INFO][3269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.536 [INFO][3269] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" 
WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-csi--node--driver--rknk4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0446ee6e-94f2-402c-a109-4fa0a50e3591", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713", Pod:"csi-node-driver-rknk4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e9101f5fa5", MAC:"aa:38:50:40:c4:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:25.571405 containerd[1954]: 2025-02-13 18:52:25.563 [INFO][3269] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713" Namespace="calico-system" Pod="csi-node-driver-rknk4" WorkloadEndpoint="172.31.25.248-k8s-csi--node--driver--rknk4-eth0" Feb 13 18:52:25.583166 kubelet[2403]: E0213 18:52:25.583046 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:25.612599 containerd[1954]: time="2025-02-13T18:52:25.612436725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:25.613402 containerd[1954]: time="2025-02-13T18:52:25.613271663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:25.613675 containerd[1954]: time="2025-02-13T18:52:25.613520107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:25.618225 containerd[1954]: time="2025-02-13T18:52:25.618012987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:25.638922 containerd[1954]: time="2025-02-13T18:52:25.638690559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-csm8d,Uid:21ef90db-b2fe-4e03-8456-7a3fe3670dc6,Namespace:default,Attempt:1,} returns sandbox id \"d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1\"" Feb 13 18:52:25.642549 containerd[1954]: time="2025-02-13T18:52:25.642391368Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 18:52:25.666483 systemd[1]: Started cri-containerd-c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713.scope - libcontainer container c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713. Feb 13 18:52:25.719123 containerd[1954]: time="2025-02-13T18:52:25.717354612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rknk4,Uid:0446ee6e-94f2-402c-a109-4fa0a50e3591,Namespace:calico-system,Attempt:8,} returns sandbox id \"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713\"" Feb 13 18:52:26.583771 kubelet[2403]: E0213 18:52:26.583712 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:26.731400 systemd-networkd[1860]: cali881b9a8a6fa: Gained IPv6LL Feb 13 18:52:26.732918 systemd-networkd[1860]: cali5e9101f5fa5: Gained IPv6LL Feb 13 18:52:26.790506 kernel: bpftool[3529]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 18:52:27.223517 systemd-networkd[1860]: vxlan.calico: Link UP Feb 13 18:52:27.223537 systemd-networkd[1860]: vxlan.calico: Gained carrier Feb 13 18:52:27.227865 (udev-worker)[3222]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:52:27.585380 kubelet[2403]: E0213 18:52:27.585201 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:27.814819 kubelet[2403]: I0213 18:52:27.814700 2403 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 18:52:27.863625 systemd[1]: run-containerd-runc-k8s.io-e6bf1acc00b6f096a531a42a5231eee80b78aee46e79212f60b6649219e2ad13-runc.zX5prS.mount: Deactivated successfully. Feb 13 18:52:28.460166 systemd-networkd[1860]: vxlan.calico: Gained IPv6LL Feb 13 18:52:28.585424 kubelet[2403]: E0213 18:52:28.585379 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:29.397917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697937720.mount: Deactivated successfully. 
Feb 13 18:52:29.587243 kubelet[2403]: E0213 18:52:29.587177 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:30.587550 kubelet[2403]: E0213 18:52:30.587503 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:30.799485 containerd[1954]: time="2025-02-13T18:52:30.798998598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:30.804159 containerd[1954]: time="2025-02-13T18:52:30.802468018Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:30.804159 containerd[1954]: time="2025-02-13T18:52:30.802604318Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 18:52:30.814475 containerd[1954]: time="2025-02-13T18:52:30.814404411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:30.816269 containerd[1954]: time="2025-02-13T18:52:30.816208429Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.173742854s" Feb 13 18:52:30.816371 containerd[1954]: time="2025-02-13T18:52:30.816266648Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 18:52:30.819608 containerd[1954]: time="2025-02-13T18:52:30.819548230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 18:52:30.820848 containerd[1954]: time="2025-02-13T18:52:30.820800056Z" level=info msg="CreateContainer within sandbox \"d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 18:52:30.845122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218344366.mount: Deactivated successfully. Feb 13 18:52:30.849586 containerd[1954]: time="2025-02-13T18:52:30.849506843Z" level=info msg="CreateContainer within sandbox \"d5eb9dd43137a660ae1ee257f6a20c2625e00b17bb3ac83385d6f65d1af39fa1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"609d16a1624c1d22715a2f2742f1826108440281a99e1dd3a7b1bbf231fcc64a\"" Feb 13 18:52:30.850460 containerd[1954]: time="2025-02-13T18:52:30.850406861Z" level=info msg="StartContainer for \"609d16a1624c1d22715a2f2742f1826108440281a99e1dd3a7b1bbf231fcc64a\"" Feb 13 18:52:30.906424 systemd[1]: Started cri-containerd-609d16a1624c1d22715a2f2742f1826108440281a99e1dd3a7b1bbf231fcc64a.scope - libcontainer container 609d16a1624c1d22715a2f2742f1826108440281a99e1dd3a7b1bbf231fcc64a. 
Feb 13 18:52:30.918732 ntpd[1926]: Listen normally on 7 vxlan.calico 192.168.92.192:123 Feb 13 18:52:30.918863 ntpd[1926]: Listen normally on 8 cali881b9a8a6fa [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 18:52:30.919313 ntpd[1926]: 13 Feb 18:52:30 ntpd[1926]: Listen normally on 7 vxlan.calico 192.168.92.192:123 Feb 13 18:52:30.919313 ntpd[1926]: 13 Feb 18:52:30 ntpd[1926]: Listen normally on 8 cali881b9a8a6fa [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 18:52:30.919313 ntpd[1926]: 13 Feb 18:52:30 ntpd[1926]: Listen normally on 9 cali5e9101f5fa5 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 18:52:30.919313 ntpd[1926]: 13 Feb 18:52:30 ntpd[1926]: Listen normally on 10 vxlan.calico [fe80::6456:1ff:fe81:5d51%5]:123 Feb 13 18:52:30.918949 ntpd[1926]: Listen normally on 9 cali5e9101f5fa5 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 18:52:30.919017 ntpd[1926]: Listen normally on 10 vxlan.calico [fe80::6456:1ff:fe81:5d51%5]:123 Feb 13 18:52:30.954185 containerd[1954]: time="2025-02-13T18:52:30.954113007Z" level=info msg="StartContainer for \"609d16a1624c1d22715a2f2742f1826108440281a99e1dd3a7b1bbf231fcc64a\" returns successfully" Feb 13 18:52:31.029264 kubelet[2403]: I0213 18:52:31.029186 2403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-csm8d" podStartSLOduration=3.852068918 podStartE2EDuration="9.029166913s" podCreationTimestamp="2025-02-13 18:52:22 +0000 UTC" firstStartedPulling="2025-02-13 18:52:25.641327057 +0000 UTC m=+23.697038190" lastFinishedPulling="2025-02-13 18:52:30.818425064 +0000 UTC m=+28.874136185" observedRunningTime="2025-02-13 18:52:31.028923579 +0000 UTC m=+29.084634724" watchObservedRunningTime="2025-02-13 18:52:31.029166913 +0000 UTC m=+29.084878082" Feb 13 18:52:31.589160 kubelet[2403]: E0213 18:52:31.589066 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:32.110158 containerd[1954]: time="2025-02-13T18:52:32.109584685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:32.111336 containerd[1954]: time="2025-02-13T18:52:32.111252571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 18:52:32.112527 containerd[1954]: time="2025-02-13T18:52:32.112444882Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:32.116026 containerd[1954]: time="2025-02-13T18:52:32.115923094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:32.117677 containerd[1954]: time="2025-02-13T18:52:32.117500761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.297888902s" Feb 13 18:52:32.117677 containerd[1954]: time="2025-02-13T18:52:32.117551340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 
18:52:32.121545 containerd[1954]: time="2025-02-13T18:52:32.121475993Z" level=info msg="CreateContainer within sandbox \"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 18:52:32.143150 containerd[1954]: time="2025-02-13T18:52:32.142964887Z" level=info msg="CreateContainer within sandbox \"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b9afe75da7d49becefed361fc108a1db399aa02fd99806f81dc84b8e02c7c1a\"" Feb 13 18:52:32.143668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667518704.mount: Deactivated successfully. Feb 13 18:52:32.144813 containerd[1954]: time="2025-02-13T18:52:32.144249228Z" level=info msg="StartContainer for \"9b9afe75da7d49becefed361fc108a1db399aa02fd99806f81dc84b8e02c7c1a\"" Feb 13 18:52:32.211420 systemd[1]: Started cri-containerd-9b9afe75da7d49becefed361fc108a1db399aa02fd99806f81dc84b8e02c7c1a.scope - libcontainer container 9b9afe75da7d49becefed361fc108a1db399aa02fd99806f81dc84b8e02c7c1a. Feb 13 18:52:32.262117 containerd[1954]: time="2025-02-13T18:52:32.262022896Z" level=info msg="StartContainer for \"9b9afe75da7d49becefed361fc108a1db399aa02fd99806f81dc84b8e02c7c1a\" returns successfully" Feb 13 18:52:32.264732 containerd[1954]: time="2025-02-13T18:52:32.264578314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 18:52:32.589637 kubelet[2403]: E0213 18:52:32.589570 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:33.589982 kubelet[2403]: E0213 18:52:33.589897 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:33.993475 containerd[1954]: time="2025-02-13T18:52:33.991618259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:33.997635 containerd[1954]: time="2025-02-13T18:52:33.997148189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 18:52:33.999619 containerd[1954]: time="2025-02-13T18:52:33.998813232Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:34.010969 containerd[1954]: time="2025-02-13T18:52:34.010912455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:34.015452 containerd[1954]: time="2025-02-13T18:52:34.015391687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.750751627s" Feb 13 18:52:34.015667 containerd[1954]: time="2025-02-13T18:52:34.015634769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 18:52:34.019296 containerd[1954]: time="2025-02-13T18:52:34.019242744Z" level=info msg="CreateContainer within sandbox \"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 18:52:34.077871 containerd[1954]: time="2025-02-13T18:52:34.077688475Z" level=info msg="CreateContainer within sandbox \"c87ef60cb76ffb85269f3b17884a73f94bdd6731d65900ca2570bd4a4fa96713\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4fb8e27ebbb6bb96196a283ca650128822f4859c14073196d9c1df1e539bc58c\"" Feb 13 18:52:34.080026 containerd[1954]: time="2025-02-13T18:52:34.079949512Z" level=info msg="StartContainer for \"4fb8e27ebbb6bb96196a283ca650128822f4859c14073196d9c1df1e539bc58c\"" Feb 13 18:52:34.150440 systemd[1]: Started cri-containerd-4fb8e27ebbb6bb96196a283ca650128822f4859c14073196d9c1df1e539bc58c.scope - libcontainer container 4fb8e27ebbb6bb96196a283ca650128822f4859c14073196d9c1df1e539bc58c. Feb 13 18:52:34.243120 containerd[1954]: time="2025-02-13T18:52:34.242654527Z" level=info msg="StartContainer for \"4fb8e27ebbb6bb96196a283ca650128822f4859c14073196d9c1df1e539bc58c\" returns successfully" Feb 13 18:52:34.590229 kubelet[2403]: E0213 18:52:34.590153 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:34.814607 kubelet[2403]: I0213 18:52:34.814547 2403 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 18:52:34.814607 kubelet[2403]: I0213 18:52:34.814594 2403 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 18:52:35.067718 kubelet[2403]: I0213 18:52:35.067390 2403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rknk4" podStartSLOduration=23.772688157 podStartE2EDuration="32.067369286s" podCreationTimestamp="2025-02-13 18:52:03 +0000 UTC" firstStartedPulling="2025-02-13 18:52:25.722290803 +0000 UTC m=+23.778001936" lastFinishedPulling="2025-02-13 18:52:34.016971944 +0000 UTC m=+32.072683065" observedRunningTime="2025-02-13 18:52:35.066959799 +0000 UTC m=+33.122670956" watchObservedRunningTime="2025-02-13 18:52:35.067369286 +0000 UTC m=+33.123080431" Feb 13 18:52:35.591139 kubelet[2403]: E0213 18:52:35.591052 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:36.591284 kubelet[2403]: E0213 18:52:36.591209 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:36.817647 systemd[1]: Created slice kubepods-besteffort-pod38b1b13a_7f8f_40c7_b1f4_e65b110bdb26.slice - libcontainer container kubepods-besteffort-pod38b1b13a_7f8f_40c7_b1f4_e65b110bdb26.slice. 
Feb 13 18:52:36.907409 kubelet[2403]: I0213 18:52:36.907337 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/38b1b13a-7f8f-40c7-b1f4-e65b110bdb26-data\") pod \"nfs-server-provisioner-0\" (UID: \"38b1b13a-7f8f-40c7-b1f4-e65b110bdb26\") " pod="default/nfs-server-provisioner-0" Feb 13 18:52:36.907603 kubelet[2403]: I0213 18:52:36.907422 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5kv\" (UniqueName: \"kubernetes.io/projected/38b1b13a-7f8f-40c7-b1f4-e65b110bdb26-kube-api-access-2q5kv\") pod \"nfs-server-provisioner-0\" (UID: \"38b1b13a-7f8f-40c7-b1f4-e65b110bdb26\") " pod="default/nfs-server-provisioner-0" Feb 13 18:52:37.127029 containerd[1954]: time="2025-02-13T18:52:37.126578684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:38b1b13a-7f8f-40c7-b1f4-e65b110bdb26,Namespace:default,Attempt:0,}" Feb 13 18:52:37.386958 systemd-networkd[1860]: cali60e51b789ff: Link UP Feb 13 18:52:37.387382 systemd-networkd[1860]: cali60e51b789ff: Gained carrier Feb 13 18:52:37.392459 (udev-worker)[3846]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.224 [INFO][3827] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.248-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 38b1b13a-7f8f-40c7-b1f4-e65b110bdb26 1202 0 2025-02-13 18:52:36 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.25.248 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.225 [INFO][3827] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.296 [INFO][3838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" HandleID="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Workload="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.316 [INFO][3838] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" HandleID="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Workload="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003324c0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.248", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 18:52:37.296150312 +0000 UTC"}, Hostname:"172.31.25.248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.317 [INFO][3838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.317 [INFO][3838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.317 [INFO][3838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.248' Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.322 [INFO][3838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.332 [INFO][3838] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.343 [INFO][3838] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.348 [INFO][3838] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.352 [INFO][3838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.352 [INFO][3838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.355 [INFO][3838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4 Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.365 [INFO][3838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.378 [INFO][3838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.195/26] block=192.168.92.192/26 handle="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.379 [INFO][3838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.195/26] handle="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" host="172.31.25.248" Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.379 [INFO][3838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 18:52:37.412696 containerd[1954]: 2025-02-13 18:52:37.379 [INFO][3838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.195/26] IPv6=[] ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" HandleID="k8s-pod-network.940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Workload="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.418420 containerd[1954]: 2025-02-13 18:52:37.381 [INFO][3827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"38b1b13a-7f8f-40c7-b1f4-e65b110bdb26", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:37.418420 containerd[1954]: 2025-02-13 18:52:37.382 [INFO][3827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.195/32] ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.418420 containerd[1954]: 2025-02-13 18:52:37.382 [INFO][3827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.418420 containerd[1954]: 2025-02-13 18:52:37.386 [INFO][3827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.418769 containerd[1954]: 2025-02-13 18:52:37.389 [INFO][3827] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"38b1b13a-7f8f-40c7-b1f4-e65b110bdb26", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"42:8d:e2:31:ba:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 18:52:37.418769 containerd[1954]: 2025-02-13 18:52:37.410 [INFO][3827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.248-k8s-nfs--server--provisioner--0-eth0" Feb 13 18:52:37.453109 containerd[1954]: time="2025-02-13T18:52:37.452930711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:37.453543 containerd[1954]: time="2025-02-13T18:52:37.453142561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:37.453543 containerd[1954]: time="2025-02-13T18:52:37.453229998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:37.453709 containerd[1954]: time="2025-02-13T18:52:37.453497740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:37.501439 systemd[1]: Started cri-containerd-940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4.scope - libcontainer container 940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4. 
Feb 13 18:52:37.567468 containerd[1954]: time="2025-02-13T18:52:37.567408332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:38b1b13a-7f8f-40c7-b1f4-e65b110bdb26,Namespace:default,Attempt:0,} returns sandbox id \"940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4\"" Feb 13 18:52:37.570346 containerd[1954]: time="2025-02-13T18:52:37.570253117Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 18:52:37.591537 kubelet[2403]: E0213 18:52:37.591476 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:38.594200 kubelet[2403]: E0213 18:52:38.592167 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:38.699407 systemd-networkd[1860]: cali60e51b789ff: Gained IPv6LL Feb 13 18:52:39.595641 kubelet[2403]: E0213 18:52:39.595562 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:39.623175 update_engine[1931]: I20250213 18:52:39.622244 1931 update_attempter.cc:509] Updating boot flags... Feb 13 18:52:39.752690 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3924) Feb 13 18:52:40.180286 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3928) Feb 13 18:52:40.596308 kubelet[2403]: E0213 18:52:40.596151 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:40.712683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956280772.mount: Deactivated successfully. 
Feb 13 18:52:40.918785 ntpd[1926]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 18:52:40.919347 ntpd[1926]: 13 Feb 18:52:40 ntpd[1926]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 18:52:41.597505 kubelet[2403]: E0213 18:52:41.597282 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:42.598427 kubelet[2403]: E0213 18:52:42.598375 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:43.505412 containerd[1954]: time="2025-02-13T18:52:43.505334830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:43.509216 containerd[1954]: time="2025-02-13T18:52:43.509130775Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 18:52:43.512837 containerd[1954]: time="2025-02-13T18:52:43.512745047Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:43.528171 containerd[1954]: time="2025-02-13T18:52:43.528081798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:52:43.534316 containerd[1954]: time="2025-02-13T18:52:43.534044580Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.963729345s" Feb 13 18:52:43.534316 containerd[1954]: time="2025-02-13T18:52:43.534154445Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 18:52:43.538727 containerd[1954]: time="2025-02-13T18:52:43.538554408Z" level=info msg="CreateContainer within sandbox \"940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 18:52:43.559741 kubelet[2403]: E0213 18:52:43.559529 2403 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:43.562182 containerd[1954]: time="2025-02-13T18:52:43.562071823Z" level=info msg="CreateContainer within sandbox \"940d89783333ef1772a5382420ff5c6ba31170b7a91b5fe46905d028d256fed4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4b3763a7c6c46fd6ee9b751916946c1b52a4203f89883030638d93563dc63ca8\"" Feb 13 18:52:43.563776 containerd[1954]: time="2025-02-13T18:52:43.563705814Z" level=info msg="StartContainer for \"4b3763a7c6c46fd6ee9b751916946c1b52a4203f89883030638d93563dc63ca8\"" Feb 13 18:52:43.599290 kubelet[2403]: E0213 18:52:43.599195 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:43.625385 systemd[1]: Started 
cri-containerd-4b3763a7c6c46fd6ee9b751916946c1b52a4203f89883030638d93563dc63ca8.scope - libcontainer container 4b3763a7c6c46fd6ee9b751916946c1b52a4203f89883030638d93563dc63ca8. Feb 13 18:52:43.675235 containerd[1954]: time="2025-02-13T18:52:43.674745055Z" level=info msg="StartContainer for \"4b3763a7c6c46fd6ee9b751916946c1b52a4203f89883030638d93563dc63ca8\" returns successfully" Feb 13 18:52:44.095775 kubelet[2403]: I0213 18:52:44.095704 2403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.129027036 podStartE2EDuration="8.095684285s" podCreationTimestamp="2025-02-13 18:52:36 +0000 UTC" firstStartedPulling="2025-02-13 18:52:37.569569831 +0000 UTC m=+35.625280964" lastFinishedPulling="2025-02-13 18:52:43.536227092 +0000 UTC m=+41.591938213" observedRunningTime="2025-02-13 18:52:44.095601011 +0000 UTC m=+42.151312155" watchObservedRunningTime="2025-02-13 18:52:44.095684285 +0000 UTC m=+42.151395418" Feb 13 18:52:44.600366 kubelet[2403]: E0213 18:52:44.600297 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:45.600956 kubelet[2403]: E0213 18:52:45.600884 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:46.601572 kubelet[2403]: E0213 18:52:46.601510 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:47.602231 kubelet[2403]: E0213 18:52:47.602179 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:48.603072 kubelet[2403]: E0213 18:52:48.602996 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:49.603971 kubelet[2403]: E0213 18:52:49.603905 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:50.604170 kubelet[2403]: E0213 18:52:50.604069 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:51.605365 kubelet[2403]: E0213 18:52:51.605296 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:52.605888 kubelet[2403]: E0213 18:52:52.605825 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:53.606360 kubelet[2403]: E0213 18:52:53.606262 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:54.607152 kubelet[2403]: E0213 18:52:54.607038 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:55.607608 kubelet[2403]: E0213 18:52:55.607543 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:56.608018 kubelet[2403]: E0213 18:52:56.607964 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:57.608874 kubelet[2403]: E0213 18:52:57.608798 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:58.609796 
kubelet[2403]: E0213 18:52:58.609730 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:59.610317 kubelet[2403]: E0213 18:52:59.610255 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:00.610869 kubelet[2403]: E0213 18:53:00.610806 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:01.611413 kubelet[2403]: E0213 18:53:01.611340 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:02.612154 kubelet[2403]: E0213 18:53:02.612070 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:03.559598 kubelet[2403]: E0213 18:53:03.559539 2403 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:03.609163 containerd[1954]: time="2025-02-13T18:53:03.608830673Z" level=info msg="StopPodSandbox for \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\"" Feb 13 18:53:03.609163 containerd[1954]: time="2025-02-13T18:53:03.608997390Z" level=info msg="TearDown network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" successfully" Feb 13 18:53:03.609163 containerd[1954]: time="2025-02-13T18:53:03.609019830Z" level=info msg="StopPodSandbox for \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" returns successfully" Feb 13 18:53:03.610598 containerd[1954]: time="2025-02-13T18:53:03.609678961Z" level=info msg="RemovePodSandbox for \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\"" Feb 13 18:53:03.610598 containerd[1954]: time="2025-02-13T18:53:03.609722415Z" level=info msg="Forcibly stopping sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\"" Feb 13 18:53:03.610598 containerd[1954]: time="2025-02-13T18:53:03.609843074Z" level=info msg="TearDown network for sandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" successfully" Feb 13 18:53:03.613336 kubelet[2403]: E0213 18:53:03.613064 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:03.616161 containerd[1954]: time="2025-02-13T18:53:03.615290162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 18:53:03.616161 containerd[1954]: time="2025-02-13T18:53:03.615430743Z" level=info msg="RemovePodSandbox \"8bc8827d0986f988a4cf7af7754474aa987105dc08d7a6da8aba2a45c6adf849\" returns successfully" Feb 13 18:53:03.619418 containerd[1954]: time="2025-02-13T18:53:03.618582838Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:53:03.619418 containerd[1954]: time="2025-02-13T18:53:03.618807738Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:53:03.619418 containerd[1954]: time="2025-02-13T18:53:03.618835120Z" level=info msg="StopPodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:53:03.620123 containerd[1954]: time="2025-02-13T18:53:03.620056001Z" level=info msg="RemovePodSandbox for \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:53:03.620221 containerd[1954]: time="2025-02-13T18:53:03.620136157Z" level=info msg="Forcibly stopping sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\"" Feb 13 18:53:03.620380 containerd[1954]: time="2025-02-13T18:53:03.620292163Z" level=info msg="TearDown network for sandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" successfully" Feb 13 18:53:03.627708 containerd[1954]: time="2025-02-13T18:53:03.626519976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 18:53:03.627708 containerd[1954]: time="2025-02-13T18:53:03.627557445Z" level=info msg="RemovePodSandbox \"624ea982eb54e02955f774cd6d397f9a5c2199b29bf024e55c81272481cae8f9\" returns successfully" Feb 13 18:53:03.628773 containerd[1954]: time="2025-02-13T18:53:03.628481655Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:53:03.628773 containerd[1954]: time="2025-02-13T18:53:03.628650122Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:53:03.628773 containerd[1954]: time="2025-02-13T18:53:03.628672455Z" level=info msg="StopPodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:53:03.629215 containerd[1954]: time="2025-02-13T18:53:03.629160503Z" level=info msg="RemovePodSandbox for \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:53:03.629303 containerd[1954]: time="2025-02-13T18:53:03.629210770Z" level=info msg="Forcibly stopping sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\"" Feb 13 18:53:03.629361 containerd[1954]: time="2025-02-13T18:53:03.629338338Z" level=info msg="TearDown network for sandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" successfully" Feb 13 18:53:03.632829 containerd[1954]: time="2025-02-13T18:53:03.632756976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 18:53:03.632829 containerd[1954]: time="2025-02-13T18:53:03.632837288Z" level=info msg="RemovePodSandbox \"26f58a6546e8c52c2011fae32158d97b19929085fef320770174df3358f6f488\" returns successfully" Feb 13 18:53:03.633998 containerd[1954]: time="2025-02-13T18:53:03.633580748Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:53:03.633998 containerd[1954]: time="2025-02-13T18:53:03.633731716Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:53:03.633998 containerd[1954]: time="2025-02-13T18:53:03.633754181Z" level=info msg="StopPodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:53:03.635142 containerd[1954]: time="2025-02-13T18:53:03.634798355Z" level=info msg="RemovePodSandbox for \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:53:03.635142 containerd[1954]: time="2025-02-13T18:53:03.634969845Z" level=info msg="Forcibly stopping sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\"" Feb 13 18:53:03.635312 containerd[1954]: time="2025-02-13T18:53:03.635197203Z" level=info msg="TearDown network for sandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" successfully" Feb 13 18:53:03.638457 containerd[1954]: time="2025-02-13T18:53:03.638387571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 18:53:03.638586 containerd[1954]: time="2025-02-13T18:53:03.638467151Z" level=info msg="RemovePodSandbox \"b7f32438259f7b16f24cc3a70d39dc5f4cfbc17f42aa4fb124f052b3d621e837\" returns successfully" Feb 13 18:53:03.639310 containerd[1954]: time="2025-02-13T18:53:03.639010504Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:53:03.639310 containerd[1954]: time="2025-02-13T18:53:03.639187691Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:53:03.639310 containerd[1954]: time="2025-02-13T18:53:03.639210731Z" level=info msg="StopPodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:53:03.640042 containerd[1954]: time="2025-02-13T18:53:03.639986227Z" level=info msg="RemovePodSandbox for \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:53:03.640190 containerd[1954]: time="2025-02-13T18:53:03.640039337Z" level=info msg="Forcibly stopping sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\"" Feb 13 18:53:03.640246 containerd[1954]: time="2025-02-13T18:53:03.640204002Z" level=info msg="TearDown network for sandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" successfully" Feb 13 18:53:03.643468 containerd[1954]: time="2025-02-13T18:53:03.643379546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 18:53:03.643683 containerd[1954]: time="2025-02-13T18:53:03.643477981Z" level=info msg="RemovePodSandbox \"19eef6acc55f02cafbd0477d0f4952d0d77168d973ecfddbb2e379a6014cf272\" returns successfully" Feb 13 18:53:03.644474 containerd[1954]: time="2025-02-13T18:53:03.644196049Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:53:03.644474 containerd[1954]: time="2025-02-13T18:53:03.644347498Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:53:03.644474 containerd[1954]: time="2025-02-13T18:53:03.644368895Z" level=info msg="StopPodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:53:03.645653 containerd[1954]: time="2025-02-13T18:53:03.644867114Z" level=info msg="RemovePodSandbox for \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:53:03.645653 containerd[1954]: time="2025-02-13T18:53:03.644910232Z" level=info msg="Forcibly stopping sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\"" Feb 13 18:53:03.645653 containerd[1954]: time="2025-02-13T18:53:03.645035366Z" level=info msg="TearDown network for sandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" successfully" Feb 13 18:53:03.648236 containerd[1954]: time="2025-02-13T18:53:03.648158435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 18:53:03.648427 containerd[1954]: time="2025-02-13T18:53:03.648235173Z" level=info msg="RemovePodSandbox \"4665f935ba4ba3635104e161098fdf522399a4b1863ac16f18923676f4ef7286\" returns successfully" Feb 13 18:53:03.651150 containerd[1954]: time="2025-02-13T18:53:03.650495442Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:53:03.651150 containerd[1954]: time="2025-02-13T18:53:03.650693703Z" level=info msg="TearDown network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" successfully" Feb 13 18:53:03.651150 containerd[1954]: time="2025-02-13T18:53:03.650715436Z" level=info msg="StopPodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" returns successfully" Feb 13 18:53:03.651623 containerd[1954]: time="2025-02-13T18:53:03.651554956Z" level=info msg="RemovePodSandbox for \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:53:03.651623 containerd[1954]: time="2025-02-13T18:53:03.651605919Z" level=info msg="Forcibly stopping sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\"" Feb 13 18:53:03.651850 containerd[1954]: time="2025-02-13T18:53:03.651729781Z" level=info msg="TearDown network for sandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" successfully" Feb 13 18:53:03.655679 containerd[1954]: time="2025-02-13T18:53:03.655610703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 18:53:03.656076 containerd[1954]: time="2025-02-13T18:53:03.655702553Z" level=info msg="RemovePodSandbox \"6d912628494ca560cddb7dd42d258309f894633b1b8d4a8b4bce107b5df4c913\" returns successfully" Feb 13 18:53:03.656418 containerd[1954]: time="2025-02-13T18:53:03.656382409Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" Feb 13 18:53:03.656738 containerd[1954]: time="2025-02-13T18:53:03.656708874Z" level=info msg="TearDown network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" successfully" Feb 13 18:53:03.656963 containerd[1954]: time="2025-02-13T18:53:03.656844658Z" level=info msg="StopPodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" returns successfully" Feb 13 18:53:03.657561 containerd[1954]: time="2025-02-13T18:53:03.657508670Z" level=info msg="RemovePodSandbox for \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" Feb 13 18:53:03.657686 containerd[1954]: time="2025-02-13T18:53:03.657559309Z" level=info msg="Forcibly stopping sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\"" Feb 13 18:53:03.657745 containerd[1954]: time="2025-02-13T18:53:03.657690271Z" level=info msg="TearDown network for sandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" successfully" Feb 13 18:53:03.661259 containerd[1954]: time="2025-02-13T18:53:03.661191272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 18:53:03.661371 containerd[1954]: time="2025-02-13T18:53:03.661271931Z" level=info msg="RemovePodSandbox \"59e759ee5e38dfec83ce251f294ee7ddf192f4fa36a1a9e73c63bb032cb2d3dd\" returns successfully" Feb 13 18:53:03.662174 containerd[1954]: time="2025-02-13T18:53:03.661890054Z" level=info msg="StopPodSandbox for \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\"" Feb 13 18:53:03.662174 containerd[1954]: time="2025-02-13T18:53:03.662050594Z" level=info msg="TearDown network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" successfully" Feb 13 18:53:03.662174 containerd[1954]: time="2025-02-13T18:53:03.662072987Z" level=info msg="StopPodSandbox for \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" returns successfully" Feb 13 18:53:03.662948 containerd[1954]: time="2025-02-13T18:53:03.662893136Z" level=info msg="RemovePodSandbox for \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\"" Feb 13 18:53:03.663036 containerd[1954]: time="2025-02-13T18:53:03.662947025Z" level=info msg="Forcibly stopping sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\"" Feb 13 18:53:03.663130 containerd[1954]: time="2025-02-13T18:53:03.663077028Z" level=info msg="TearDown network for sandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" successfully" Feb 13 18:53:03.667111 containerd[1954]: time="2025-02-13T18:53:03.667025813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 18:53:03.667223 containerd[1954]: time="2025-02-13T18:53:03.667135774Z" level=info msg="RemovePodSandbox \"6c24932f7f5ea9555e3ced664ffb387d376673ca685b642ec590a1999eaaea78\" returns successfully" Feb 13 18:53:04.613627 kubelet[2403]: E0213 18:53:04.613566 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:05.614607 kubelet[2403]: E0213 18:53:05.614540 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:06.615234 kubelet[2403]: E0213 18:53:06.615175 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:07.616073 kubelet[2403]: E0213 18:53:07.616001 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:08.284248 systemd[1]: Created slice kubepods-besteffort-pod4fd9c2b3_fa15_46ca_871f_9f2591caf74a.slice - libcontainer container kubepods-besteffort-pod4fd9c2b3_fa15_46ca_871f_9f2591caf74a.slice. Feb 13 18:53:08.302034 kubelet[2403]: I0213 18:53:08.301959 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-32be273c-22f4-40f4-8d24-12a19f6f2b5f\" (UniqueName: \"kubernetes.io/nfs/4fd9c2b3-fa15-46ca-871f-9f2591caf74a-pvc-32be273c-22f4-40f4-8d24-12a19f6f2b5f\") pod \"test-pod-1\" (UID: \"4fd9c2b3-fa15-46ca-871f-9f2591caf74a\") " pod="default/test-pod-1" Feb 13 18:53:08.302034 kubelet[2403]: I0213 18:53:08.302037 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxq5d\" (UniqueName: \"kubernetes.io/projected/4fd9c2b3-fa15-46ca-871f-9f2591caf74a-kube-api-access-nxq5d\") pod \"test-pod-1\" (UID: \"4fd9c2b3-fa15-46ca-871f-9f2591caf74a\") " pod="default/test-pod-1" Feb 13 18:53:08.439136 kernel: FS-Cache: Loaded Feb 13 18:53:08.481331 kernel: RPC: Registered named UNIX socket transport module. Feb 13 18:53:08.481451 kernel: RPC: Registered udp transport module. Feb 13 18:53:08.481490 kernel: RPC: Registered tcp transport module. Feb 13 18:53:08.483229 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 18:53:08.483292 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 18:53:08.617319 kubelet[2403]: E0213 18:53:08.617188 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:08.791778 kernel: NFS: Registering the id_resolver key type Feb 13 18:53:08.791985 kernel: Key type id_resolver registered Feb 13 18:53:08.792033 kernel: Key type id_legacy registered Feb 13 18:53:08.831498 nfsidmap[4251]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 18:53:08.837366 nfsidmap[4252]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 18:53:08.890597 containerd[1954]: time="2025-02-13T18:53:08.890431443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4fd9c2b3-fa15-46ca-871f-9f2591caf74a,Namespace:default,Attempt:0,}" Feb 13 18:53:09.084908 (udev-worker)[4236]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 18:53:09.086350 systemd-networkd[1860]: cali5ec59c6bf6e: Link UP Feb 13 18:53:09.087568 systemd-networkd[1860]: cali5ec59c6bf6e: Gained carrier Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:08.973 [INFO][4253] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.248-k8s-test--pod--1-eth0 default 4fd9c2b3-fa15-46ca-871f-9f2591caf74a 1323 0 2025-02-13 18:52:37 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.248 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-" Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:08.973 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0" Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.018 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" HandleID="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Workload="172.31.25.248-k8s-test--pod--1-eth0" Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.039 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" HandleID="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Workload="172.31.25.248-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220ae0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.248", "pod":"test-pod-1", "timestamp":"2025-02-13 18:53:09.018265019 +0000 UTC"}, Hostname:"172.31.25.248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.039 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.039 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.040 [INFO][4264] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.248'
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.043 [INFO][4264] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.048 [INFO][4264] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.053 [INFO][4264] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.056 [INFO][4264] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.060 [INFO][4264] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.060 [INFO][4264] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.062 [INFO][4264] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.068 [INFO][4264] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.079 [INFO][4264] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.196/26] block=192.168.92.192/26 handle="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.079 [INFO][4264] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.196/26] handle="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" host="172.31.25.248"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.079 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.079 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.196/26] IPv6=[] ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" HandleID="k8s-pod-network.d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Workload="172.31.25.248-k8s-test--pod--1-eth0"
Feb 13 18:53:09.111937 containerd[1954]: 2025-02-13 18:53:09.081 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4fd9c2b3-fa15-46ca-871f-9f2591caf74a", ResourceVersion:"1323", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:53:09.114796 containerd[1954]: 2025-02-13 18:53:09.082 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.196/32] ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0"
Feb 13 18:53:09.114796 containerd[1954]: 2025-02-13 18:53:09.082 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0"
Feb 13 18:53:09.114796 containerd[1954]: 2025-02-13 18:53:09.087 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0"
Feb 13 18:53:09.114796 containerd[1954]: 2025-02-13 18:53:09.090 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.248-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4fd9c2b3-fa15-46ca-871f-9f2591caf74a", ResourceVersion:"1323", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.248", ContainerID:"d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:26:56:89:d5:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:53:09.114796 containerd[1954]: 2025-02-13 18:53:09.106 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.248-k8s-test--pod--1-eth0"
Feb 13 18:53:09.152220 containerd[1954]: time="2025-02-13T18:53:09.151926503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:53:09.152220 containerd[1954]: time="2025-02-13T18:53:09.152028968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:53:09.152220 containerd[1954]: time="2025-02-13T18:53:09.152064386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:53:09.153038 containerd[1954]: time="2025-02-13T18:53:09.152266881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:53:09.180442 systemd[1]: Started cri-containerd-d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e.scope - libcontainer container d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e.
Feb 13 18:53:09.248240 containerd[1954]: time="2025-02-13T18:53:09.247947973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4fd9c2b3-fa15-46ca-871f-9f2591caf74a,Namespace:default,Attempt:0,} returns sandbox id \"d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e\""
Feb 13 18:53:09.250537 containerd[1954]: time="2025-02-13T18:53:09.250415426Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 18:53:09.617744 containerd[1954]: time="2025-02-13T18:53:09.617581920Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:53:09.618035 kubelet[2403]: E0213 18:53:09.618000 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:09.620588 containerd[1954]: time="2025-02-13T18:53:09.620504486Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 18:53:09.629758 containerd[1954]: time="2025-02-13T18:53:09.629693737Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 379.15313ms"
Feb 13 18:53:09.630399 containerd[1954]: time="2025-02-13T18:53:09.629956670Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 18:53:09.637693 containerd[1954]: time="2025-02-13T18:53:09.637590455Z" level=info msg="CreateContainer within sandbox \"d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 18:53:09.667042 containerd[1954]: time="2025-02-13T18:53:09.666552703Z" level=info msg="CreateContainer within sandbox \"d21f2addc92acaf1571cced6e20d62ff9b6aed23464787d7bf3886b4c488936e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5d9bad0c7f3d4fedbc5a91c6b17b256551cf47ceb6a6963edef51e591d83a83d\""
Feb 13 18:53:09.667699 containerd[1954]: time="2025-02-13T18:53:09.667636613Z" level=info msg="StartContainer for \"5d9bad0c7f3d4fedbc5a91c6b17b256551cf47ceb6a6963edef51e591d83a83d\""
Feb 13 18:53:09.724487 systemd[1]: Started cri-containerd-5d9bad0c7f3d4fedbc5a91c6b17b256551cf47ceb6a6963edef51e591d83a83d.scope - libcontainer container 5d9bad0c7f3d4fedbc5a91c6b17b256551cf47ceb6a6963edef51e591d83a83d.
Feb 13 18:53:09.773708 containerd[1954]: time="2025-02-13T18:53:09.773628215Z" level=info msg="StartContainer for \"5d9bad0c7f3d4fedbc5a91c6b17b256551cf47ceb6a6963edef51e591d83a83d\" returns successfully"
Feb 13 18:53:10.123528 systemd-networkd[1860]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 18:53:10.619988 kubelet[2403]: E0213 18:53:10.619910 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:11.621160 kubelet[2403]: E0213 18:53:11.621085 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:12.621742 kubelet[2403]: E0213 18:53:12.621676 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:12.918826 ntpd[1926]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 18:53:12.919362 ntpd[1926]: 13 Feb 18:53:12 ntpd[1926]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 18:53:13.622364 kubelet[2403]: E0213 18:53:13.622301 2403 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"